Performance of broiler chickens fed South African sorghum-based diets with xylanase. M. Mabelebele, R.M. Gous, M. Siwela, H.V.M. O'Neil and P.A. Iji. 1 University of South Africa, College of Agricultural and Environmental Science, Florida Campus, Roodepoort, Johannesburg, South Africa; 2 University of KwaZulu-Natal, School of Rural, Earth and Environmental Science, Scottsville, Pietermaritzburg, KwaZulu-Natal, South Africa; 3 AB Vista Feed Ingredients, Marlborough, Wiltshire, SN8 4AN, United Kingdom; 4 University of New England, School of Environmental and Rural Science, Armidale, NSW 2351, Australia. Introduction Sorghum-based diets are associated with inferior broiler performance in comparison with maize- and wheat-based diets (Selle et al., 2010). This inferiority has been observed in terms of breast meat yield and feed efficiency, which are crucial in broiler production. The purpose of adding enzymes to poultry feeds is mainly to improve the utilisation of nutrients. As described by Bedford and Schulze (1998), the mechanisms of xylanase include degradation of the non-starch polysaccharides (NSP) in the cell wall matrix of feed ingredients, which releases encapsulated nutrients, lowers the digesta viscosity caused by soluble NSP, and improves the rate of diffusion between enzymes and digestion end-products. Furthermore, xylanase increases the accessibility of nutrients to endogenous digestive enzymes, stimulates intestinal motility, improves feed passage rate, and supplements the enzyme capacity of young chicks. Xylanases, β-glucanases and phytases are the enzymes most commonly used in poultry feeds. Sorghum is an important cereal crop and plays a key role in animal feed (Kaufman et al., 2013). Although sorghum is similar to maize in chemical composition, it has been associated with sub-optimal and inconsistent poultry performance (Black et al., 2005; Bryden et al., 2009). Some sorghum varieties may contain condensed tannin, which has pronounced anti-nutritive properties. Grain sorghum also contains high levels of phytate, or phytic acid (Dorhorty et al., 1982). In addition to chelating minerals, phytate binds with protein through binary and ternary complexes and binds with starch directly, or indirectly through starch granule-associated protein (Baldwin, 2001; Oatway et al., 2001). Because of these complexes, enzymatic degradation of phytate increases the availability of starch and protein in sorghum. The use of phytase in poultry feed has also increased in response to growing concerns over phosphorus (P) pollution of the environment. Hydrolysis of phytate increases the availability of phytate-bound P (phytate-P) and reduces its excretion (Simons et al., 1990). It also increases protein and starch utilisation (Selle et al., 2010). Combinations of xylanase and phytase have been of great interest in wheat-based diets. Xylanase reduces digesta viscosity and releases nutrients that are entrapped in the cell wall matrix. Enhanced apparent metabolisable energy (AME) and improved protein digestibility have been reported in wheat-based diets supplemented with xylanase and phytase (Ravindran et al., 1999). Despite the popularity of phytase and carbohydrases, or their combinations, in wheat- and barley-based diets, published data on their use in sorghum-based diets are very limited. Consequently, two sub-experiments were conducted to evaluate the impact of exogenous enzymes added to South African sorghum-based diets on the growth performance, internal organs and nutrient utilisation of broiler chickens.
Materials and methods A total of 480 day-old sexed Ross 308 broiler chicks (mean weight 40.10 g), obtained from National Chicks (Pietermaritzburg, KwaZulu-Natal, South Africa), were used in Experiments 1A and 1B. Chicks were assigned, according to experiment, to cages (42 × 75 × 25 cm) in four-tier battery brooders housed in an environmentally controlled room. The initial brooding temperature was 33 °C and was gradually reduced to 24 ± 1 °C by 19 days of age. On the first day, 24 hours of light was provided; thereafter, 18 hours of light and 6 hours of darkness were maintained each day. Access to feed and water was ad libitum. The Animal Ethics Committee of the University of KwaZulu-Natal (UKZN), Pietermaritzburg, South Africa, approved the experiment (approval number 016/14/Animal). Three commercial hybrid sorghum varieties were selected for their outstanding yield performance, agronomic characteristics, preference by farmers and tannin contents. The sorghum varieties Pan8816 (non-tannin variety, 0.1 mg catechin equivalents/100 mg), Pan8906 (non-tannin variety, 0.2 mg catechin equivalents/100 mg) and Pan8625 (tannin variety, 2.0 mg catechin equivalents/100 mg) were grown in a controlled field trial at Pannar Research Services (Pty) Ltd, Klerksdorp, South Africa, from November 2013 to February 2014. The mean temperature was 27.3 °C and rainfall was 527 mm (South African Weather Services, http:/www.weathers.co.za). The grains were field-dried and harvested at less than 14% moisture content. The chemical and nutritional properties of the selected varieties were determined and reported by Mabelebele et al. (2015). The basal starter diet was formulated to meet the requirements recommended in the Ross broiler specifications (Aviagen, 2014). The diets were supplemented with 500 FTU/kg microbial phytase (Quantum Blue, AB Vista Feed Ingredients, Marlborough, UK), with or without 1600 BXU/kg xylanase (Econase XT, AB Vista Feed Ingredients, Marlborough, UK), giving a total of six experimental diets (Table 1): PAN8816 with phytase, with or without 1600 BXU/kg xylanase; PAN8625 with 500 FTU/kg phytase, with or without 1600 BXU/kg xylanase; and PAN8906 with 500 FTU/kg phytase, with or without 1600 BXU/kg xylanase. The diets were fed as mash to male and female broiler chickens from one to 21 days of age. For experiment 1A, each treatment was randomly assigned to 24 pens in a 2 (sex) × 3 (sorghum variety) × 2 (with or without 1600 BXU xylanase/kg diet) factorial arrangement in a completely randomised design, with 10 chickens per pen. Feed intake and body weight were measured weekly and used to calculate growth performance. Mortality was recorded as it occurred. At 21 days of age, two chickens per pen, close to the mean pen weight, were selected, weighed and slaughtered. The visceral organs were weighed and the dressed weight was determined. The carcass was subsequently weighed, followed by removal and weighing of the breast, thighs and drumsticks, all bone-in with skin.
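Growth performance in a trial of this kind is typically summarised per pen from the weekly feed intake and body weight records described above, as cumulative intake, body weight gain and feed conversion ratio (FCR, feed:gain). The short sketch below illustrates that bookkeeping; the pen label and figures are hypothetical, not data from this experiment.

```python
# Illustrative per-pen growth-performance bookkeeping (pen label and figures are hypothetical).
weekly_records = {
    "pen_1": {
        "feed_intake_g": [150.0, 380.0, 620.0],          # weekly feed intake per bird, weeks 1-3
        "body_weight_g": [40.1, 170.0, 430.0, 820.0],    # body weight per bird at days 0, 7, 14, 21
    },
}

def growth_summary(pen):
    gain = pen["body_weight_g"][-1] - pen["body_weight_g"][0]   # body weight gain, g/bird
    intake = sum(pen["feed_intake_g"])                           # cumulative feed intake, g/bird
    fcr = intake / gain                                          # feed conversion ratio, g feed : g gain
    return {"gain_g": round(gain, 1), "feed_intake_g": round(intake, 1), "fcr": round(fcr, 2)}

for name, pen in weekly_records.items():
    print(name, growth_summary(pen))   # e.g. pen_1 {'gain_g': 779.9, 'feed_intake_g': 1150.0, 'fcr': 1.47}
```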
For experiment 1B, a total of 108 female broiler chickens were randomly assigned to 18 pens in a 3 (sorghum variety) × 2 (with or without xylanase) factorial arrangement in a completely randomised design, with 6 chickens per replicate. The diets were formulated similarly to those of experiment 1A. Celite, a source of acid-insoluble ash (AIA), was included in the diets as an inert marker. Feed intake and body weight were measured at the beginning and end of a four-day experimental period. On day 25, all birds were euthanised by intravenous injection of sodium pentobarbitone, and digesta from the distal half of the ileum were collected and processed as described previously (Ravindran et al., 1999a). The nitrogen (N) contents of diet and ileal digesta samples were analysed using an FP-428 nitrogen determinator (LECO® Corporation, St. Joseph, Michigan, USA) as described by Sweeney (1989). Nitrogen freed by combustion at high temperature in pure oxygen was measured with a thermal conductivity detector, using helium as a reference, and converted to crude protein using a numerical factor of 5.70. The furnace temperature was maintained at 950 °C for the combustion of samples in ultra-high-purity oxygen. To interpret the detector response as percentage nitrogen (w/w), calibration was carried out using a pure primary standard of ethylenediaminetetraacetic acid (EDTA). Diet and ileal digesta samples were also analysed for phosphorus by the inductively coupled plasma (ICP) method (Vista MPX-radial) as described by Anderson and Henderson (1986). Acid-insoluble ash was determined using the method described by Vogtmann et al. (1975). A 5 g (feed) or 4 g (excreta) sample was placed in a previously weighed glass beaker, to which 50 ml of 4 N HCl was added. The beaker was covered with a watch glass and the contents boiled gently for 45 min. The slurry was then filtered through ashless filter paper and washed twice with double-distilled water. The filter paper containing the washed residue was placed in a dried, pre-weighed crucible and dried for 24 h at 70 °C. The dried residue was then ashed at 600 °C for at least 4 h, allowed to cool in a desiccator, and weighed to determine the weight of the acid-insoluble ash. Crude protein content was calculated by multiplying the nitrogen content by a factor of 6.25. Starch and fat were measured following the standard methods of AOAC (2005). The apparent ileal digestibility coefficients were calculated using the following formula: apparent ileal digestibility coefficient = [(NT/AIA)d − (NT/AIA)i] / (NT/AIA)d, where (NT/AIA)d = ratio of nutrient to acid-insoluble ash in the diet, and (NT/AIA)i = ratio of nutrient to acid-insoluble ash in the ileal digesta.
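As a worked illustration of the marker-ratio calculation above, the short sketch below computes an apparent ileal digestibility coefficient from nutrient and acid-insoluble ash concentrations in diet and digesta. The figures are hypothetical, not values from this study.

```python
# Hypothetical example of the apparent ileal digestibility calculation (marker-ratio method).
def apparent_ileal_digestibility(nutrient_diet, aia_diet, nutrient_digesta, aia_digesta):
    """Coefficient = [(NT/AIA)_diet - (NT/AIA)_digesta] / (NT/AIA)_diet."""
    ratio_diet = nutrient_diet / aia_diet
    ratio_digesta = nutrient_digesta / aia_digesta
    return (ratio_diet - ratio_digesta) / ratio_diet

# Illustrative figures (g/kg DM): 210 g/kg crude protein and 10 g/kg AIA in the diet,
# 90 g/kg crude protein and 25 g/kg AIA in the ileal digesta.
coeff = apparent_ileal_digestibility(210.0, 10.0, 90.0, 25.0)
print(f"Apparent ileal CP digestibility: {coeff:.2f}")   # about 0.83
```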
Results and discussion The influence of xylanase in sorghum-based broiler diets on gross performance from 1 to 21 days is presented in Table 2. Broiler chickens offered sorghum varieties Pan8816 and Pan8906 had similar (P >0.05) feed intake, body weight and FCR from 1 to 21 days of age. The tannin-containing sorghum variety evaluated in the current study, Pan8625, resulted in poor feed intake, weight gain and FCR in the period 1-7 days. These results contradict those reported by Nyachoti et al. (1996), who found that broiler chickens on high-tannin sorghum-based diets had higher feed intake and feed efficiency. These researchers further suggested that the effects of tannins are related to their astringency, an effect resulting from the binding of salivary proteins that causes dryness in the mouth. However, as taste acuity in the chicken is not well developed, it seems unlikely that taste plays a role in decreasing feed intake. Enzyme inclusion did not improve (P >0.05) feed intake, body weight or FCR at any age. It was apparent that enzyme supplementation had a more pronounced effect during the early phase of the feeding trial. A study by Selle et al. (2010) reported an increase in feed intake and weight gain but depressed feed efficiency when xylanase was added to sorghum-based broiler diets. This may be because sorghum is a 'non-viscous' grain with only 4% soluble NSP (Choct, 2006; Selle et al., 2010). Furthermore, the disruption of insoluble NSP in sorghum endosperm cell walls by NSP-degrading enzymes is considered to be limited, which is attributed to the extent of arabinose substitution and the high levels of glucuronic acid in sorghum arabinoxylan (Taylor, 2005). Ibrahim et al. (2012) indicated that inclusion of β-glucanase in sorghum-based diets significantly decreased total feed intake and significantly improved weight gain and FCR of broiler chickens. No treatment interaction was observed for any production parameter throughout the experimental period. Male broiler chickens had higher weight gain (7.46%), feed intake (11.13%) and feed efficiency (9.64%) than females over 1 to 21 days of age. Several authors have also stated that male chickens, irrespective of strain, are superior in live and carcass weights to females (Gous et al., 1999; Scheuermann et al., 2003; Abudulla et al., 2010). The differences observed between male and female chickens may be the result of sexual dimorphism, which tends to favour males over females in poultry (Ilori et al., 2010; Peters et al., 2010). Similar (P >0.05) carcass weights and breast, thigh and drumstick yields were observed in chickens on all sorghum varieties at 21 days of age (Table 3). The addition of xylanase to the diet did not improve (P >0.05) the meat-part yields of broiler chickens aged 21 days. These results are in agreement with the report by Elnagar and Abdel-Wareth (2014), who indicated that there was no sorghum and enzyme interaction on carcass weights and meat cut-up parts of broiler chickens aged 21 days. The effect of microbial enzyme and sorghum variety on relative organ weights is presented in Table 4. The small intestine was heavier (P <0.05) in broiler chickens fed the tannin-containing variety, Pan8625, whereas the other organs did not change. The addition of xylanase had no effect (P >0.05). Sex had no effect (P >0.05) on the relative weights of the organs measured, except for the bursa, where females had heavier (P <0.05) weights than males. No treatment interaction was observed. Similar to the results of the current study, Elnagar and Abdel-Wareth (2014) reported that enzyme supplementation had no influence on organ weights. Nyachoti et al. (1996) indicated an influence of sorghum tannin on small intestinal development, resulting in low intestinal weights at 21 days, and suggested that this could be due to the coarse nature of the diet.
The nutrient digestibility of broiler chickens offered sorghum-based diets with added xylanase is shown in Table 5. Crude protein digestibility was not affected (P >0.05) by either variety or xylanase addition; similar (P >0.05) crude protein digestibility values were observed for all diets regardless of sorghum variety or xylanase supplementation. It was expected that a tannin-containing variety without xylanase would be poorly digested, although it is not clear how xylanase acted in the presence of tannin in this variety to improve its digestibility. It is uncertain why xylanase did not improve fat digestibility in the current study. Significant (P <0.05) interactions between the treatments were observed: Pan8906 supplemented with xylanase showed better (P <0.05) starch digestibility than Pan8816 without xylanase addition. Moreover, when xylanase was included, starch digestibility tended to improve numerically in all sorghum varieties, although no significant (P >0.05) treatment interaction was observed overall. Similar (P >0.05) phosphorus digestibility values were observed in all three sorghum varieties with or without xylanase supplementation. Cowieson et al. (2006) have shown that protein, protein-carbohydrate, protein-polyphenol and carbohydrate-polyphenol interactions are the main factors affecting nutrient digestibility, and that xylanase-based enzymes act on NSP through two main modes of action. Taylor (2005) went further to say that when the protein matrix and protein bodies are poorly digested, starch digestibility could be affected, and the digestibility of the two nutrients appears to be highly correlated. The current study is similar to that of Perez-Maldonado and Rodrigues (2009) in that different sorghum cultivars were used, and starch and protein digestibility were found to be highly correlated. Enzyme inclusion significantly increased (P <0.05) crude protein digestibility, and a treatment interaction was observed (P <0.0001) because crude protein digestibility tended to increase numerically when xylanase was added. Sorghum variety Pan8906 with xylanase supplementation had lower (P >0.05) fat digestibility than the same variety without enzyme inclusion, and varieties with xylanase had lower (P >0.05) fat digestibilities than those without xylanase supplementation. Pan8816 and Pan8625 offered to broiler chickens gave significantly higher (P <0.05) fat digestibility (75.93% and 76.60%, respectively) than Pan8906 (65.94%). Xylanase inclusion yielded numerically lower (P >0.05) fat digestibility.
Conclusion Sorghum variety affected the growth performance of broiler chickens. Moreover, nutrient digestibility was affected by differences in variety. However, there were some observed treatment effects in the early life of broiler chickens offered sorghum-based diets. Xylanase supplementation did not improve the performance or nutrient digestibility of broiler chickens. In the current study the sorghum was fed as mash; there might therefore be a need to investigate the nutritive value of sorghum fed in different forms.
Table 1 Composition and calculated analysis of the starter diet
Table 2 Influence of xylanase inclusion in sorghum-based broiler diets on feed intake (FI, g/bird), body weight gain (gain, g/bird) and feed conversion ratio (FCR, g:g, FI:BWG)
Table 3 Effect of sorghum variety and xylanase supplementation on carcass weight and parts yield of chickens (21 d)
Table 4 Effect of sorghum variety and xylanase supplementation on broiler organ weights (g/100 g BW) at 21 days (1 proventriculus and gizzard weight with digesta; 2 small intestine with digesta)
Table 5 Nutrient digestibility (%) of broiler chickens offered sorghum-based diets with added microbial enzymes
a,b,c Mean values in a column not sharing the same superscript are significantly different (P <0.05). SEM: standard error of the mean.
Metal exposure in the Greenlandic ACCEPT cohort: follow-up and comparison with other Arctic populations ABSTRACT Humans are exposed to metals through diet and lifestyle, e.g. smoking. Some metals are essential for physiological body functions, while others are non-essential and can be toxic to humans. This study follows up on metal concentrations in the Greenlandic ACCEPT birth cohort (mothers and fathers) and compares them with other Arctic populations. The data, from 2019 to 2020, include blood metal concentrations and lifestyle and food frequency questionnaires from 101 mothers and 76 fathers, 24-55 years old, living in Nuuk, Sisimiut and Ilulissat. A high percentage (25-45%) exceeded international guidance values for Hg. For the mothers, the metal concentrations changed significantly from inclusion at pregnancy to this follow-up 3-5 years after birth; some increased and others decreased. Most metals differed significantly between mothers and fathers, while a few also differed between residential towns. Several metals correlated significantly with marine food intake and socio-economic factors, but the direction of the correlations varied. Traditional marine food intake was associated positively with Se, As and Hg. To the best of our knowledge, this study provides the most recent data on metal exposure of both men and women in Greenland, elucidates metal exposure sources among Arctic populations, and documents the need for continued biomonitoring to follow up on exceedances of the guidance values for Hg.
Supplementary material (captions and analysis notes): Hg was analysed using a DMA-80, with 23 replicate analyses of Seronorm Whole Blood L-2 for quality control; values >=LOD but <LOQ were included in the analyses because the QA/QC results documented an accuracy within 100 ± 20% for values below or close to the LOQ, based on 6 replicates. LOD: limit of detection; LOQ: limit of quantification; WB-Se: whole blood selenium; P-Se: plasma selenium. Table S3: Principal component analysis (PCA) loading factors for each metal; PC-1 correlated strongly with As, Hg and Se (whole blood and plasma), and PC-2 with Ca, Cu, Fe, Mg and Zn. Table S4: Characteristics of the study population. Table S5: Concentrations of metals in fathers and mothers; differences between towns were tested with one-way ANOVA on ln-transformed variables with Tukey HSD post hoc tests, and the adjusted ANCOVA analyses were adjusted for age and sex. Table S7: Spearman correlations between Cd concentrations (µg/L) and lifestyle and socioeconomic factors by smoking history. Table S8: Spearman correlations between metal concentrations and lifestyle and socioeconomic factors for fathers and mothers. Further supplementary tables report unadjusted associations between metal concentrations and intake of traditional and imported food groups (times per month), estimated by linear regression with the metal as the dependent variable and food intake as the independent variable, both ln-transformed. Categorical variables were: educational level (primary school, high school, technical college, university); personal and household income (<100,000, 100,000-250,000, and >250,000 DKK/year); current alcohol intake (0, 1-7, ≥8 drinks/week); and smoking history (never, former, current); the answer option "Don't know" was omitted for income and alcohol intake.
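The supplementary analyses above describe linear regressions of ln-transformed blood metal concentrations on ln-transformed food-intake frequencies (times per month). A minimal sketch of such a model is given below; the data are hypothetical, and the use of statsmodels is an assumption, since the excerpt does not state which software was used.

```python
# Hypothetical sketch of the ln-ln regression described in the supplementary tables:
# blood metal concentration (dependent) vs. food-intake frequency (independent), both ln-transformed.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative data: Hg (ug/L whole blood) and traditional marine food intake (times/month).
df = pd.DataFrame({
    "hg_ugL": [1.2, 3.5, 0.8, 5.1, 2.2, 7.4, 1.9, 4.0],
    "marine_food_per_month": [2, 10, 1, 20, 6, 30, 4, 12],
})
df["ln_hg"] = np.log(df["hg_ugL"])
df["ln_intake"] = np.log(df["marine_food_per_month"])

model = smf.ols("ln_hg ~ ln_intake", data=df).fit()
beta = model.params["ln_intake"]
ci_low, ci_high = model.conf_int().loc["ln_intake"]
print(f"beta = {beta:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f}), p = {model.pvalues['ln_intake']:.3f}")
```

The sign of the beta coefficient corresponds to the direction of the association reported for each food group.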
Does nondiabetic renal disease exacerbate diabetic nephropathy in patients with type 2 diabetes? See Article on Page 565-572 Diabetic nephropathy (DN) is one of the major complications of diabetes mellitus (DM). It is estimated that 20% to 40% of DM patients will develop a diabetic renal disease. Currently, DN is the leading cause of end-stage renal disease worldwide, including in Korea, and has become a serious economic burden to the healthcare system in Korea [1]. However, renal diseases other than DN, a heterogeneous group of renal lesions, also occur frequently in patients with diabetes. Renal biopsy is considered to be a major part of the clinical practice of nephrology, because the information it provides is critical in making a specific diagnosis and decisions for patient management and for the evaluation of disease activity and prognosis [2]. Due to its invasiveness, however, renal biopsy is not routinely performed in diabetic patients presenting with proteinuria alone. Thus, the diagnosis of DN is almost always based on clinical findings and supported by persistent proteinuria without hematuria, hypertension, and progressive decline in renal function. The validity of this clinical approach is well-established in type 1 diabetes, but not in those with type 2 diabetes [3]. It is not uncommon for patients with a 7- to 10-year history of type 1 DM to have demonstrated diabetic retinopathy (DR) and a history of microalbuminuria. These patients present no evidence of sudden-onset marked proteinuria, hematuria, abnormal kidney size, or other renal disease [4-7]. Unfortunately, most of our knowledge of DN in type 2 diabetes patients is derived from studies of patients with type 1 DM [2,3]. Furthermore, nondiabetic renal diseases (NDRD), either isolated or superimposed on an underlying DN, have been reported, and the prevalence of biopsy-proven NDRD in type 2 patients varies from 10% to 85% [8-10]. It is well-known and generally accepted that it is difficult to reverse DN, whereas some cases of NDRD are readily treatable and remittable with appropriate treatment. However, DN and NDRD coexist in some diabetic patients. In summary, it is important to distinguish NDRD from DN and to identify features that discriminate between NDRD and DN, because this could assist clinicians in making a rapid and appropriate diagnosis, resulting in more effective management. In this issue of The Korean Journal of Internal Medicine, Byun et al. [11] found that shorter duration of diabetes, higher hemoglobin A1c (HbA1c), and the absence of DR are independent predictors of NDRD, and that NDRD is associated with better renal outcomes with specific treatment, such as steroids and immunosuppressants. Additionally, patients with a history of hematuria were more likely to have NDRD. A large meta-analysis study showed that clinical predictors allowing discrimination of NDRD from DN were: 1) absence of DR, 2) shorter DM duration, 3) lower HbA1c, and 4) lower blood pressure. They also found no difference in age, 24-hour urinary protein excretion, serum creatinine, glomerular filtration rate, or blood urea nitrogen concentrations in patients with NDRD and DN [12]. The results of that study are quite similar to those of the authors [13] with the exception of blood pressure. Byun et al. [11] also found that immunoglobulin A (IgA) nephropathy was the most common lesion, followed by membranous nephropathy, crescentic glomerulonephritis, and tubulointerstitial nephritis, in order of frequency. 
In contrast, a report from Malaysia showed that the causes of NDRD in type 2 diabetes, in decreasing order of frequency, were acute interstitial nephritis, glomerulonephritis, hypertensive renal disease, and acute tubular necrosis [14]. A recent study from China reported that IgA nephropathy was the most common NDRD, followed by tubulointerstitial lesion, membranoproliferative glomerulonephritis, and membranous nephropathy [12]. The high prevalence of IgA nephropathy as the most common NDRD in type 2 diabetes patients in these two studies is consistent with their geographic distribution, in which IgA nephropathy is the most common glomerulonephritis in the general population. This reflects the prevalence patterns of glomerular disease in adults in the general population [15,16]. From the results of these studies, we may cautiously conclude that IgA nephropathy is the most common NDRD in East Asian diabetic patients. Another interesting finding of Byun et al. [11] was that the rate of decline in renal function was faster in patients with DN than in those with NDRD, either alone or superimposed on DN. The authors' explanation was that patients with NDRD had a shorter duration and lower degree of severity of diabetes and a higher incidence of potentially treatable renal diseases, such as IgA nephropathy and membranous nephropathy. These findings are consistent with a previous report demonstrating that a shorter duration of diabetes and the presence of potentially treatable NDRD had a favorable prognosis in type 2 diabetes [14]. Several studies have suggested that there are distinct clinical and pathological features in diabetic patients with DN complicating NDRD. These patients have some of the clinical and pathological features of DN, which include a high prevalence of DR, a long duration of diabetes, poor glycemic control, and a lack of history of hematuria [8-12]. As some cases of NDRD are remittable and, in some cases, treatable if correctly intercepted, leading to completely different renal outcomes, the importance of accurate diagnosis cannot be overstated. It is well-known that the only way to distinguish NDRD from DN is renal histology. However, the prevalences of NDRD are not uniform, which is likely to be due to differences in study populations and/or selection criteria. Thus, larger, multicenter, randomized, prospective studies are needed to confirm these preliminary findings. There is an urgent need to identify features that can discriminate between NDRD and DN; this could provide clinicians with more objective, reliable, and safe diagnoses, leading to more effective medical management.
Investigation on the Identity Construction of Young Foreign Language Teachers in Colleges and Universities Based on a Feature Selection Algorithm. With the gradual implementation of the strategic goal of strengthening the country through education, attention to students' learning is increasing day by day. At the same time, the teaching effect of individual teachers depends to a large extent on the teachers' own identity, so it is necessary to improve the construction of teachers' identity. Most research on teachers' identity construction starts from identity theory, which does not play a significant role when applied critically in practical teaching activities. In response to these problems, this paper uses the ReliefF algorithm and the Spark algorithm from the family of feature selection algorithms to process identity construction scientifically, and implements the application of the improved ReliefF feature selection algorithm and the feature selection steps based on the Spark algorithm, respectively. The experimental results show that the improved ReliefF algorithm has better feature selection accuracy when the feature proportion is 30% to 40%. This shows that teacher identity construction based on a feature selection algorithm can provide an objective basis for the realization of teacher identity. Introduction With the deepening of reform and opening up and the increasing emphasis on education and teaching, more and more people are paying attention to education and teaching. The reform of traditional education is also gradually being carried out in this environment. The purpose is to improve the learning efficiency of students in the classroom and the teaching efficiency of teachers. In this process, in addition to changes in students' learning methods, discussion of the integrity of teachers' own identity has become a major breakthrough in the upgrading of classroom education. Teachers' construction of their own identity can deepen their understanding of individual values, which amount to a sense of identity for life. At the same time, the perfection of teachers' identity construction not only provides internal development motivation for individuals in this profession but also requires teachers themselves to perceive the identity of the profession. It is also necessary to achieve recognition of the teacher's identity by others. In this discussion, there is no obvious correlation between teachers' identity construction and the specific methods teachers use; it is more a sound psychological state developed by teachers themselves. The importance of teachers' identity construction is thus highlighted. In order to conduct scientific research on this phenomenon, this paper will discuss the feature selection algorithm. … teachers [1]. Zacharias conducted research on the identity of English teachers through structuralism [2]. RDS Lima and his team studied the professional identity constructed by different teachers in their teaching activities [3]. Teng studied the identity construction of preservice teachers, with the purpose of studying the influence of teachers' emotional changes in educational activities on teaching effect [4]. The results of Jim's research showed that the strength of expectations placed on teachers determined identity construction [5]. Research on teachers' identity construction has improved teachers' own cognitive structure.
At the same time, the research content also includes studies on the identity of native-language and non-native-language teachers. This provides a sufficient theoretical basis for constructing different types of teacher identities. However, owing to the lack of data and of scientific method support, the practical application of identity construction research tends to become a mere formality. It is necessary to introduce scientific research methods. Aiming at the problem that such research on identity construction easily becomes a formality, this paper uses a feature selection algorithm to carry out the corresponding research. There have been many research results on the application of this class of algorithms. Deniz and his team studied the application of genetic algorithms and machine learning algorithms to feature selection to solve binary classification problems [6]. Ramesh used clustering to create features on the data and used different methods for the establishment of clusters to study this problem [7]. Arif used a feature selection algorithm to build a prediction system for student achievement [8]. Kalita and his team introduced a feature selection algorithm based on intelligent water drops and used a two-way limit value technique to determine the set of feature samples required for their experiments [9]. Mohammad and Alsmadi constructed a data mining detection system through feature selection, with the purpose of improving the accuracy of the data set [10]. These studies include many machine learning applications, some used to solve classification problems and some to build prediction systems. They are basically studies with many data samples. They can continuously improve the operation of feature selection algorithms, but they lack research on more theoretical research objects, leaving fields that are mainly theory-based without the support of scientific algorithms. In this paper, the feature selection algorithm is applied to the identity construction of teachers, which gives machine learning algorithms a fuller range of applications. This paper adopts the feature selection algorithm in research on teacher identity construction. The purpose is to conduct more scientific research on identity construction, an object of study that is biased toward theory, so as to better improve teacher identity construction. The feature selection ReliefF algorithm and the Spark algorithm are used to achieve this research goal, and the experiments and result analysis of teacher identity construction based on the improved ReliefF algorithm are reported. As a result, when the feature proportion of the sample is in the range of 30% to 40%, the value of AP is higher, indicating that the improved ReliefF algorithm performs better for the selection of samples with multiple features. The innovations of this paper are that (1) the ReliefF algorithm and the Spark algorithm are used within the feature selection framework for the identity construction of teachers, so that research on identity construction is no longer only at the theoretical level; and (2) establishing the correlation between machine learning and the theoretical research object provides more scientific algorithmic support for that object. Teacher Identity Construction Method Based on Feature Selection Algorithm 3.1. The Construction of Teacher Identity.
Research on teacher identity has shown great necessity for the innovative development of education, because it covers teachers' confidence in education and teaching and the corresponding characteristics of their teaching behavior. At the same time, the construction of teachers' identity is also an inner driving force that influences the teaching profession, and it provides stronger starting power for teachers' education and teaching and deepens teachers' beliefs. Research on teacher identity construction has shifted from focusing on teachers' teaching skills and professional knowledge in the past to focusing on the identity of individual teachers, and it constantly explores the interior of this profession. Teacher Identity Construction. The first thing that needs to be done for the construction of teacher identity is the identity construction of the teacher's identity itself, one aspect of which develops from the nature of the profession that teachers are engaged in [11]. It is highly consistent with individual teachers' self-identification, although the two are different from each other. The connotation of the identity construction of teacher identity is very extensive and very important to the individual teacher; it contains several parts, and its structure is illustrated in Figure 1. The content in Figure 1 is often used to distinguish the object of the teaching profession, which reflects the needs of the individual teacher's life. The construction of teachers' identity is often based on teachers' professionalism. At the same time, it is combined with the teacher's own individual identity, and the combination of the two eventually forms a highly condensed understanding of the teacher's own nature [12]. The constructed content is based on the complete development of individual teachers, which provides a better perspective and intrinsic motivation for profound changes in education and teacher education. Teacher identity refers to the teacher's personality, breaking the "standard" identity held in the gaze and imagination of others. The process of forming one's own teacher identity is confirmed through cognition and reflection on "positioning oneself as a teacher" and "what type of teacher to become" in one's own experience. Teachers' Professional Identity Construction and Its Influencing Factors. In its meaning, the construction of teacher identity includes the individual teacher's awareness of himself as a teacher and the construction of the teacher's professional identity, and the two cannot be regarded as completely independent. As the former kind of understanding is based on deep thinking about the individual self, its content includes the construction of teachers' self-perception of their own specialties, abilities, knowledge and values. The entry point for the identity construction of individual teachers starts largely from external attention [13] and continues to deepen from the outside to the inside, as represented in Figure 2. Figure 2 shows the identification of teachers' professional identity, which first includes the external identification of the teacher's individual identity, moving from the outside to the inside. This part belongs to social identity; it is the professionalism of teachers that can be seen by the public, both for teachers themselves and for the teacher group.
The teacher's inner self-construction is at the same level as the external individual's construction of the teacher. It carries the experience of achieving professional control and belonging within the group, viewed through the perspective of the self and of the relations between individual teachers and the teacher group. It can be seen from this that the construction of teachers' professional identity has great uncertainty, and its content is also diverse. Teachers' professional identity construction includes self-identity, professional identity, role identity, professional role identity and other aspects. The construction of teachers' professional identity involves the unique professionalism of teachers in teaching practice. The factors that influence the construction of teachers' professional identity can be explored at two levels: the school and the teachers themselves. The composition of the specific influencing factors is shown in Figure 3. The influencing factors in Figure 3 come mainly from two aspects: the teaching environment in which the individual teacher is located, and the individual teacher. The first factor in the former is the teaching environment where the teacher is located, that is, the classroom teaching environment. Among these factors, the teacher's control over the whole teaching classroom is the most important, as it can have a long-term impact on the teacher. Beyond this, the influence of the teaching environment includes the influence of the school environment on individual teachers, covering the mutual influence between teachers, colleagues and student groups in the school. The student group is decisive for the construction of the teacher's professional identity, because the student group can display the teacher's teaching achievements and so establish the teacher's professional identity. The influence of the school environment also includes the influence of the teacher's discipline on the teacher himself. This factor is the main component of the teacher's professional identity and an entry point for the outside world to judge teachers [14]. Another major influencing factor is the teacher himself, which includes some of the teacher's own characteristics and the influence of events the teacher has experienced on his teaching. Introduction of Different Perspectives on Teacher Identity Construction. What this paper studies is the identity construction of foreign language teachers in colleges and universities, and the preceding part clarified the connotation of teachers' identity construction. The specific angle for constructing teacher identity differs here, because the characteristics of foreign language teaching are based on language. Therefore, the first aspect of identity construction is professional language skills, and the second is language-major knowledge, which also encompasses different teachers' views of language and the practicability of foreign language teaching. Then there is the emotional tendency of individual teachers in the process of education and teaching, which is of great significance to the construction of teachers' identity, because only when the emotional orientation of teachers and their students is consistent will teaching results be well manifested. For this point of view, a corresponding questionnaire was used to compare three teachers.
The results of the survey are shown in Table 1. It can be seen from Table 1 that teacher C's emotional orientation in teaching is very consistent with that of the students. Five different questions were used to design the questionnaire. The general content of the questions is whether teachers provide effective learning plans to all students; whether, in classroom teaching, teachers use different styles of classroom activities to enhance students' interest in learning; whether teachers encourage students who encounter learning difficulties in a timely manner; whether teachers' arrangements of classroom content are diverse; and whether teachers introduce foreign cultures. Among the three teachers in the questionnaire, Teacher B has the shortest teaching experience and so has invested the least in the emotional training of students, while Teacher C has taught for 15 years and has the most profound emotional concern for students [15]. The mean values also show that the three teachers of different teaching ages all attach great importance to grasping students' emotional orientation. In addition, the learning methods and strategies adopted by individual teachers play a role in the construction of teacher identity; they are very important for individual teachers' learning and growth and also for the group of students they lead. Through the different learning methods adopted, teachers mainly affect students' ways of learning foreign languages. The specific approaches include cognitive strategies, regulation strategies and resource management strategies. A survey was conducted on the learning strategies that teachers implement with students, divided into two parts: students' cognitive approach to foreign language learning and students' regulation approach to foreign language learning. The results of the survey are shown in Figure 4. The four factors of cognitive style in Figure 4 can be summarized as follows: the teacher establishes connections in learning for the students through shared points of connection; the teacher guides students to summarize and think during language learning; the teacher guides the learning methods used by students and points out the key points of the learning content; and the teacher guides students to make reasonable guesses during learning [8]. Next is the teachers' cultivation of students' foreign language culture. This perspective is also very important for the construction of teachers' identity, because only a foreign language teacher with relatively high cultural literacy can convey it to students. A questionnaire survey was also used to examine the influence of the three teachers A, B and C on their students' foreign language culture. The results are shown in Table 2. Table 2 shows the impact of cultural training on the identity construction of teachers. It can be seen from the table that teacher C pays the most attention to the cultivation of students' culture, which may be related to teacher C's learning and teaching experience. Improved ReliefF Feature Selection Algorithm. Feature selection is a product of the mature development of modern computer science and technology, because the rapid development of computers has brought with it the generation of many data samples. Feature selection summarizes, from the large data samples obtained by the computer, a small set of data samples with certain characteristics.
This algorithm can speed up the process of machine learning. At the same time, this class of algorithms removes features that are irrelevant or redundant for classification. It reduces data dimensionality, avoids the curse of dimensionality, and can speed up the operation of learning algorithms so as to improve their efficiency. This paper constructs the identity of the teacher; the content of identity construction has been explained in the method above and involves many characteristics of identity construction. The ReliefF feature selection algorithm is used here to process the extracted features. This algorithm can select representative identity features well, so as to better perform identity construction. The detailed algorithm is introduced as follows. Feature Selection Principle and Classification. Feature selection obtains a small set of feature data samples relative to the full data samples, and the application of this principle is mainly aimed at problems with classification properties. It roughly includes three steps: the generation of feature sample sets, their evaluation, and the verification of the performance of feature selection. The specific process is shown in Figure 5. Figure 5 shows the specific operating steps of feature selection. It can be seen from the process that the normal operation of feature selection depends largely on the method of obtaining characteristic sample data and on the criteria used to judge this characteristic sample set. The method for deriving the former sample set is mainly a search method over the applied data. After the characteristic data sample is obtained, a judgment needs to be made, and there are two different algorithm modes for this [16]; the corresponding structure diagram is shown in Figure 6. Figure 6 contains two different feature selection methods. In the filter-style structure of Figure 6(a), the features of the data samples are read first, and the obtained feature samples are then passed to the algorithm to achieve selection processing of the data in advance. In the wrapper feature selection of Figure 6(b), the learning algorithm itself is used to classify the characteristic samples, so the results are relatively reliable. Improved ReliefF Feature Selection Algorithm. The difference between the improved ReliefF feature selection algorithm and the traditional algorithm is that the former can select and extract multiple features. The advantage of this is that the various characteristics of the data samples can be classified and extracted, making the operation of the algorithm more efficient. Suppose there is a data sample set A, in which an individual sample is represented by a_i, and the value of feature q for an individual data sample is represented by Q_i(q) [17]. Assuming that the feature sample set that can be predicted by the feature selection is Q_x, the predicted value of feature q can be expressed by a formula whose result lies between 0 and 1; the corresponding result is the probability of the feature in the measured data sample.
H(M_t^q | N^q_{C_a^x(q)}) in the formula can only be obtained through certain mathematical transformations, and the conversion formula is as follows: H(M_t^q) in that formula can be solved from the data sample, and the specific calculation is shown in the formula, where H(M_t^q) represents the probability of occurrence of the label q, b represents the smoothing parameter, and H(N^b_{C_a^x(b)} | M_t^b) represents the probability after verification. The final sample prediction feature function can be expressed by the formula. What this paper establishes is an algorithm mechanism for multiple features, so the evaluation mechanism used for a single-feature selection algorithm is no longer suitable. In the traditional single-label learning field, metrics such as accuracy, precision, and recall are often used to measure model performance. In the multi-label classification problem, however, an instance can belong to multiple different categories, so corresponding evaluation indices must be constructed to measure the performance of the multi-label learner. Therefore, this paper adopts another evaluation mechanism [18]. Assume there is a data sample T, a single sample in it is represented by a_i, and c represents the number of features that a single sample has. The first index evaluates the relevant and predicted features obtained from the data samples and can be expressed by the formula, where T_i represents the set of related features in the sample and rank_f(·) is the function that sorts each predicted feature. The performance of this feature selection evaluation algorithm is best when the predicted value of the formula is close to 1 [19]. The second index judges the loss of the prediction result and is mainly used when a single sample feature is misclassified; its expression is as follows: Y_i in the formula corresponds to a set of feature samples containing two features, D represents a specific sample feature, and m_l(a_i) represents the output predicted feature value. The smaller the result, the better the feature selection of this paper. When it is necessary to judge whether the top-ranked feature of a sample belongs to the sample, it can be expressed by the formula: f_k(q) represents the prediction result corresponding to feature k of the sample individual, and this index applies when errors may occur in a class of features [20]. The smaller its value, the better the performance of the algorithm. When evaluating the coverage of the sample's features, the formula can be used to evaluate performance: the smaller the result of this formula, the better the running state of the feature selection algorithm used in this paper. To determine whether there is a problem with the ranking of sample features, the formula can be used; this index is consistent with index (10), that is, the smaller the final value, the better the algorithm in this paper. For the calculation of the ReliefF feature selection algorithm, assume there is a data set G, a_i ∈ T_h is the capacity of the sample's features, and h indicates that there are h features in the sample set.
Its specific expression is as follows: the formula gives the calculation of the weighted value of the sample feature O, K represents the sample set similar to the sample feature, and L represents the number of dissimilar samples. diff(a_1, a_2, a_3) in the formula represents the difference between a_2 and a_3 with respect to feature a_1. The smaller the distance between a sample and its similar samples, the better the performance of the feature selection algorithm in this paper. Feature Selection Based on the Spark Algorithm. This algorithm is applied to further improve the ReliefF feature selection algorithm of Section 2.2, and the improvement draws on the principle of another ReliefF variant. The Spark algorithm is used to handle objects processed along multiple paths. After the feature selection algorithm above has extracted and optimized the feature elements of teacher identity, the characteristics of the selected teacher identity need to be processed more comprehensively, so that a more precise construction of the teacher's identity can be carried out. The referenced algorithm object is also transformed according to the process of the ReliefF algorithm [21], and its main flow is shown in Figure 7. The flow of the referenced ReliefF algorithm in Figure 7 is partly similar to the original ReliefF algorithm. The distance between samples can be calculated with the formula, where a_x and a_y represent, respectively, the nearest similar feature sample and the nearest dissimilar feature sample in the examined data samples. Suppose the set with r sample individuals is represented by G_r, and the original sample set is mapped to form a set with a certain spatial capacity; the formula is as follows: a_i' in the formula represents the i-th feature of the sample a' in the mapped sample feature set G'_r, and a_i^x and a_i^y are the i-th features corresponding to a_x and a_y in the original sample. Thus the sample generated by the mapping carries all the feature elements of the original sample. This calculation can only be performed for one adjacent feature. To reduce errors and the influence of noisy or abnormal data on the space conversion, the following formula can be used: in it, n represents the number of samples with the same feature, m represents the number of samples with different features, and their sum is the total number of tested samples; a_i^{xj} represents the coordinate value of the j-th dissimilar neighbouring sample of sample a, and a_i^{yj} represents the coordinate value of the j-th similar neighbouring sample. Following the ReliefF algorithm, the similarity between samples is distinguished by assigning different weights to the samples. The formula for a sample weight can be expressed as follows: D(a_j') in the formula can in turn be calculated by its own formula, expressed as follows: the weight framework of the referenced algorithm is obtained by superimposing the feature capacity of the sample and the weighted value of the sample, and the algorithm finally yields the weight of the sample. To achieve stable operation of the feature selection algorithm, it is combined with the ReliefF algorithm.
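Before the base ReliefF formula is given below, the weight update just described can be made concrete with a small sketch: weights are reduced by the diff-based distance to the nearest similar samples (hits) and increased by the distance to the nearest dissimilar samples (misses). This is a minimal two-class illustration in Python with synthetic data; the sample counts, number of neighbours k, and range normalisation are assumptions rather than the paper's exact procedure.

```python
# Minimal two-class ReliefF-style weight update: hits pull weights down, misses push them up.
import numpy as np

def relieff_weights(X, y, n_iter=50, k=3, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    rng_f = X.max(axis=0) - X.min(axis=0) + 1e-12        # per-feature range for diff()
    for _ in range(n_iter):
        i = rng.integers(n)
        dists = np.linalg.norm(X - X[i], axis=1)
        dists[i] = np.inf                                  # exclude the sample itself
        same_idx = np.where(y == y[i])[0]
        diff_idx = np.where(y != y[i])[0]
        hits = same_idx[np.argsort(dists[same_idx])[:k]]   # nearest similar samples
        misses = diff_idx[np.argsort(dists[diff_idx])[:k]] # nearest dissimilar samples
        w -= (np.abs(X[hits] - X[i]) / rng_f).mean(axis=0) / n_iter
        w += (np.abs(X[misses] - X[i]) / rng_f).mean(axis=0) / n_iter
    return w

rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0.0, 1.0, (50, 5)), rng.normal(2.0, 1.0, (50, 5))])
X[:, 3:] = rng.normal(0.0, 1.0, (100, 2))                  # last two features are pure noise
y = np.array([0] * 50 + [1] * 50)
print(np.round(relieff_weights(X, y), 3))                   # informative features get larger weights
```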
The most primitive form of the ReliefF formula is as follows: in it, a_{u,i} denotes the i-th coordinate value of sample u, a_{u,i}^{xj} denotes the i-th coordinate value of the j-th dissimilar neighbouring sample, and m denotes the number of samples. Finally, the stable feature selection formula can be obtained: in it, q_j represents the weight of the sample, q_u^{xj} represents the weight of the j-th dissimilar sample, and q_u^{yj} represents the weight of the similar sample. The formula shows that the size of the weights is positively correlated with the degree of correlation of the sample features [22]. Experiments on Teacher Identity Construction with Feature Selection Algorithms. Investigation and Results of Teacher Identity Construction. The survey on the construction of teachers' identity mainly takes the form of questionnaires, with foreign language teachers in colleges and universities as the research object of this paper. The purpose is to explore the main problems in the identity construction of foreign language teachers in colleges and universities, so as to promote the foreign language learning of college students. Through the survey, the information about the teachers in Table 3 can be obtained. The total number of teachers in Table 3 is 200. The table shows that most of the foreign language teachers surveyed are younger teachers with shorter teaching experience, which is in line with the characteristics of the teaching profession, and among foreign language teachers, those who teach English account for a relatively large share. For these teachers, the questionnaire investigates identity construction from five aspects, including teachers' self-image and self-evaluation, professional status, professional motivation, professional emotion, and professional expectation. The evaluations of young male and female teachers differ across these aspects, and the results are shown in Table 4. From Table 4, the difference in identity construction between male and female teachers is very small; larger differences appear only in teachers' self-image and professional prospects. The average rating of young female teachers for teachers' self-image is 3.78, which is 0.53 higher than that of young male teachers, and the gap between female and male teachers in the evaluation of teachers' professional prospects is 0.35. This shows that the identity construction standards of young female foreign language college teachers are based largely on their profession. In addition, the influence of teachers' age and teaching years on the construction of teacher identity is also investigated, and the results are shown in Tables 5 and 6. Tables 5 and 6 cover influencing factors for the construction of teacher identity. In Table 5, the influence of teacher age on the construction of teacher identity is small, but there are certain differences between age groups; teachers aged 31-40 play a greater role in constructing their own professional identity. Table 6 shows that teachers' years of teaching have no obvious influence on their identity construction. Then, a two-factor data analysis is conducted on the influence of teacher self-evaluation on teacher identity construction.
One factor is the knowledge that teachers themselves think they currently lack, and the other is the factors that hinder teachers' growth. The results are shown in Figure 8. From Figure 8(a), teachers of different ages lack different knowledge and have different needs. Among them, the number of teachers aged 20-25 who think they lack knowledge of teaching methods and professional knowledge is the largest, reaching 61 and 22, respectively. Figure 8 also shows that, among the growth-impeding factors for the construction of teachers' identity, income, social status, and work pressure are the most important in the group of teachers aged 20-30. In this age group, work pressure affects the construction of teachers' identity for 95 people, which is close to the total number of survey respondents. This also shows that reasonable adjustment of teachers' work pressure is conducive to the construction of their identities. Experiments and Results of Teacher Identity Construction Based on the Improved ReliefF Algorithm. To verify the improved ReliefF algorithm, classification and selection operations must be performed on various features of the relevant sample sets. In addition to the improved ReliefF algorithm, this paper selects two other algorithms for comparison. The selected feature sample sets are A and B, and the experimental results are shown in Figure 9. Figure 9 shows that the verification results change differently for the different selection algorithms applied to the different sample sets. Among them, the number-distribution method and the improved ReliefF algorithm have a smaller feedback range for the data samples, whereas the in-unit weight algorithm shows a larger variation range in the verification results, which indicates that its operation is not stable enough. The AP of the improved ReliefF algorithm is better than that of the number-allocation algorithm most of the time, and when the feature ratio of the sample is in the range of 30% to 40%, the AP value is higher. This shows that the improved ReliefF algorithm performs better in selecting samples with multiple features. Experiments and Results of Teacher Identity Construction Based on the Spark Algorithm. The Spark algorithm is obtained by optimizing the improved ReliefF algorithm; during operation, the main quantity measured is the running time of the algorithm. In the experiment, the aggregation degree of the algorithm is set to 25, and when the memory of the execution components of the algorithm is 35 G, the number of execution components has a greater impact on the operation of the algorithm; the corresponding results are shown in Figure 10. Figure 10 shows a large gap between the running times of Algorithm A and Algorithm B, with the latter taking about 10 times longer than the former. This is the experimental result under specific parameters: apart from the number of execution components, which affects the running time of the algorithm, changes in other parameters have very little impact. In this regard, the parameters of the Spark algorithm need to be tuned, and setting the aggregation degree of the sample to 25 gives a better running effect. Conclusion. This paper studied the identity construction of young foreign language teachers in colleges and universities.
The main purpose of this research was to provide a better foundation for the development of teachers. In traditional classroom education, people often focus only on teachers' professional knowledge and teaching methods and lack an understanding of what teachers' identity construction entails, so this work is of great significance for changing the previous teaching mode. This paper used two different feature selection algorithms to study the construction of teacher identity, extracted and analyzed the factors that have an important impact on the identity construction of teacher groups, and showed that different algorithms had different effects on the processing of the sample data obtained from the survey. Through the introduction of feature selection algorithms, research on teacher identity construction can be made more complete and more valuable for realizing the self-improvement of the teacher group. Data Availability. The data used to support the findings of this study are available from the author upon request. Conflicts of Interest. The author declares that there are no conflicts of interest.
2022-08-02T15:03:55.106Z
2022-07-30T00:00:00.000
{ "year": 2022, "sha1": "dd077914f5758557e4a98005d60505a852c4f7d0", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/mpe/2022/3090043.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "460aacad8ad8ff910ca109bb885632260ae3398d", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [] }
14906307
pes2o/s2orc
v3-fos-license
Effects of Intraoperative Hemodynamics on Incidence of Postoperative Delirium in Elderly Patients: A Retrospective Study Background Postoperative delirium (POD) is a common complication in the elderly. This retrospective study investigated the effect of intraoperative hemodynamics on the incidence of POD in elderly patients after major surgery to explore ways to reduce the incidence of POD. Material/Methods Based on the incidence of POD, elderly patients (81±6 y) were assigned to a POD (n=137) or non-POD group (n=343) after elective surgery with total intravenous anesthesia. POD was diagnosed based on the guidelines of the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV), using the confusion assessment method. The hemodynamic parameters, such as mean arterial pressure, were monitored 10 min before anesthesia (baseline) and intraoperatively. The incidence of intraoperative hypertension, hypotension, tachycardia, and bradycardia were calculated. Results At 30 min and 60 min after the initiation of anesthesia and at the conclusion of surgery, the monitored hemodynamic parameter values of the POD group, but not those of the non-POD group, were significantly higher than at baseline. Multivariate logistic regression analysis showed that intraoperative hypertension and tachycardia were significantly associated with POD. Conclusions Intraoperative hypertension and tachycardia were significantly associated with POD. Maintaining intraoperative stable hemodynamics may reduce the incidence of POD in elderly patients undergoing surgery. Background Delirium is a disturbance of consciousness and cognition, characterized by altered mental status, inattention, impaired cognition function, abnormal psychomotor behaviors, and disturbance of the sleep-wake cycle [1,2]. Postoperative delirium (POD) usually occurs early after surgery and anesthesia and lasts for a short time with a fluctuating course [3].The incidence of POD ranges from 9% to 87%, and increases with the age of the patient [4][5][6][7][8]. Patients who develop POD have a high risk of many complications such as myocardial infarction, pneumonia, and respiratory failure [9]. The development of POD is associated with higher mortality and morbidity, increased hospitalization, and greater cost [10]. Elderly patients who develop POD have a poor long-term outcome and have the higher risk of institutionalization [10,11]. Therefore, reducing the incidence of POD is important for improving quality of life in elderly patients, and it is important to identify risk factors during surgery and anesthesia to reduce the incidence of POD in elderly patients. The causes of POD are unclear, but it is generally believed that there are many diverse contributing factors. Some risk factors for developing POD have been identified, including older age, mental disease, poor physical condition, trauma, nutritional deficiency, anxiety, and depression [12,13]. Older age has been identified as an independent risk factor for developing POD [14]. POD is also associated with many diseases such as cerebral hypoxia and ischemia, cardiovascular disease, cardiopulmonary bypass, infection, metabolic disorder, sleep disorders, and drug intoxication with the use of benzodiazepines or anticholinergic drugs [12,15,16]. However, little is known concerning the prevention and management of risk factors for POD in elderly patients undergoing surgery. 
Although many treatment strategies have been implemented to reduce the incidence of POD, including perioperative management and pharmacological, psychological, or multicomponent interventions, these treatments have not resulted in satisfactory outcomes [15,17]. Recently, a meta-analysis indicated that lighter anesthesia may reduce the incidence of POD [16]. It is well known that intraoperative low blood pressure is a potential risk factor for developing POD [18]. However, it remains unclear whether changes in other intraoperative hemodynamic parameters are associated with the occurrence of POD. In this retrospective study, we investigated the effect of intraoperative hemodynamic changes on POD in elderly patients following surgery to explore a way to reduce the incidence of POD. Patients. All patients or their relatives provided written informed consent prior to inclusion in the study. This retrospective study included 480 elderly patients (233 men and 247 women) who underwent elective surgery under total intravenous anesthesia between January 2012 and March 2014. The incidence rate for delirium was 28.5%. Their average age was 81±6 y (range, 75-87 y) and their mean body weight was 71.5±17.5 kg (range, 54-89 kg). The inclusion criteria were age ≥75 y; American Society of Anesthesiologists class II or III; mini-mental state examination (MMSE) score >23; and >9 y of education. The exclusion criteria were a history of drug abuse; history of neurological or psychiatric diseases or hyperthyroidism; electrolyte disturbance; severe liver or kidney insufficiency; severe visual or hearing impairment; and recent administration of sedatives, antidepressants, or analgesics. Of the 480 patients, 142 underwent radical resection for gastric cancer, 134 underwent radical resection for rectocolonic cancer, 137 underwent radical resection for lung cancer, and 67 underwent thoracic or lumbar discectomy. The patients were assigned to 2 groups based on the presence of POD: the POD group (n=137) and the non-POD group (n=343). Cognitive function assessment. The MMSE was administered routinely to assess cognitive function after hospital admission. The assessed cognitive functions included time orientation (5 points, maximum) and spatial orientation (5 points); short-term memory (3 points) and delayed memory (3 points); language function, including naming (2 points), repeating (1 point), and writing (1 point); executive function (5 points); and calculation (5 points). The total maximum score was 30. An MMSE score >23 points was defined as normal cognitive function [19]. Patients with an MMSE score ≤23 were excluded from this study. Anesthesia procedure. All patients were deprived of food for 8 h and water for 2 h before surgery, and did not receive any drugs before the procedure. Electrocardiography, heart rate (HR), blood pressure (BP), and blood oxygen saturation (SpO2) were monitored routinely in the operating room. Prior to anesthesia, catheters (Becton Dickinson Critical Care Systems, Singapore) were placed in the radial artery and internal jugular vein for monitoring invasive arterial BP and central venous pressure (CVP), respectively. After surgery, all drugs were immediately stopped, and patients received patient-controlled intravenous analgesia (PCIA) via a PCIA pump (Zhuhai Fornia Medical, China) in the operating room.
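As an illustration of the enrolment screen described earlier in this section (age ≥75 y, ASA class II or III, MMSE >23, and >9 y of education, with the listed exclusion conditions), the following is a minimal sketch; the record fields and the collapsing of all exclusion criteria into a single flag are simplifying assumptions.

```python
# Minimal eligibility check matching the stated inclusion criteria; exclusions are one flag.
from dataclasses import dataclass

@dataclass
class Candidate:
    age: int
    asa_class: int
    mmse: int
    education_years: int
    has_exclusion_condition: bool = False   # any of the listed exclusion criteria

def eligible(c: Candidate) -> bool:
    return (c.age >= 75 and c.asa_class in (2, 3) and c.mmse > 23
            and c.education_years > 9 and not c.has_exclusion_condition)

print(eligible(Candidate(age=80, asa_class=2, mmse=27, education_years=12)))   # True
print(eligible(Candidate(age=78, asa_class=3, mmse=22, education_years=10)))   # False (MMSE <= 23)
```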
Patients were transferred to the Post Anesthesia Care Unit (PACU) and monitored until they had a BIS value >90, full awareness, recovery of muscle strength, normal SpO2, removal of the tracheal tube, stable vital signs, and a Steward recovery score ≥4. The Steward recovery score was assessed for consciousness (0 for no consciousness, 1 for responding to stimuli, 2 for full consciousness); airway (0 for airway requiring maintenance, 1 for maintaining a good airway, 2 for coughing on command); and movement (0 for no movement, 1 for no purposeful movement, 2 for purposeful movement) [21]. Anesthesia monitoring. The patients' electrocardiography, mean arterial pressure (MAP), HR, SpO2, cardiac output (CO), stroke volume (SV), cardiac index (CI), and CVP were monitored using a Mindray Beneview T8 monitor (Shenzhen Mindray Bio-Medical Electronics, China) and a FloTrac Vigileo monitoring system (Edwards Lifesciences, USA). BIS was monitored using an HXD-1 multifunctional electroencephalogram monitor (Heilongjiang Huaxiang Technical Developing, Harbin, China). MAP, HR, rate pressure product (RPP; the product of HR and systolic blood pressure), CVP, SpO2, CO, SV, CI, and BIS were recorded at 10 min before the initiation of anesthesia (baseline), at 10, 30, and 60 min after the initiation of anesthesia, and at the conclusion of surgery. The duration of surgery, the recovery time from anesthesia, the incidence of intraoperative hypertension (BP more than 30% above baseline), hypotension (BP more than 30% below baseline), tachycardia (HR >100 bpm), and bradycardia (HR <60 bpm), and the intraoperative dosages of fentanyl and propofol were calculated. Postoperative analgesia management. After recovery of consciousness, all patients were provided with PCIA with fentanyl. The PCIA was programmed to deliver a loading dose of 0.3 μg/kg fentanyl, with a lockout of 15 min, a background infusion of 0.1 μg/kg/h, and a single dose of 0.1 μg/kg, for 48 h. Postoperative pain intensity and comfort level were evaluated 1, 6, 12, 18, 24, 36, and 48 h after surgery. Pain intensity was assessed using the visual analogue scale (VAS) [22], which ranged from 0 (no pain) to 10 (worst possible pain). Comfort level was evaluated using the Bruggemann Comfort Scale (BCS) [23] as follows: 0 for persistent pain; 1 for no pain at rest but severe pain during deep breathing or coughing; 2 for no pain at rest but mild pain during deep breathing or coughing; 3 for no pain during deep breathing; and 4 for no pain during coughing. POD assessment. Surgeons closely observed changes in the patients' condition within one week after surgery. When any suspicious symptoms of POD were observed, neurologists were immediately consulted to evaluate the condition of the patient 1-3 days after the operation. POD was diagnosed from symptoms of both acute onset of altered and fluctuating mental status and inattention, together with either disorganized thinking or an altered level of consciousness, using the confusion assessment method [3,20]. Statistical analyses. Statistical analyses were performed using SPSS 15.0 software (SPSS, Chicago, IL, USA). Numerical values are presented as the mean and standard deviation. Repeated-measures analysis of variance (ANOVA) was used to compare differences within the same group. One-way ANOVA was used to compare differences between groups. Categorical data were compared using the chi-squared test.
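As an illustration of the intraoperative event definitions above, the following minimal sketch flags hypertension and hypotension as deviations of more than 30% from the pre-anesthesia baseline, and tachycardia and bradycardia from the stated heart-rate cut-offs; the example readings and field names are assumptions, not the study's monitoring software.

```python
# Flag intraoperative hemodynamic events from recorded MAP and HR readings.
from dataclasses import dataclass

@dataclass
class Reading:
    minute: int
    map_mmHg: float   # mean arterial pressure
    hr_bpm: float     # heart rate

def flag_events(baseline_map: float, readings: list[Reading]) -> dict[str, bool]:
    events = {"hypertension": False, "hypotension": False,
              "tachycardia": False, "bradycardia": False}
    for r in readings:
        if r.map_mmHg > 1.30 * baseline_map:
            events["hypertension"] = True
        if r.map_mmHg < 0.70 * baseline_map:
            events["hypotension"] = True
        if r.hr_bpm > 100:
            events["tachycardia"] = True
        if r.hr_bpm < 60:
            events["bradycardia"] = True
    return events

baseline = 95.0  # MAP 10 min before anesthesia (assumed example value)
intraop = [Reading(10, 70, 58), Reading(30, 128, 104), Reading(60, 98, 82)]
print(flag_events(baseline, intraop))
```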
The multivariate logistic regression model was used to evaluate the association between POD and the incidence of intraoperative hypertension, hypotension, tachycardia, and bradycardia. P-values <0.05 were considered statistically significant. Results. There were no significant differences in gender, age, body weight, or ASA score between the POD and non-POD groups (P>0.05; Table 1). The preoperative MMSE score in the POD group was not significantly different from that of the non-POD group (P>0.05). The percentages of comorbidities such as hypertension, coronary disease, diabetes, and pulmonary disease did not differ significantly between the 2 groups (P>0.05). The hemodynamic parameters of patients in the POD and non-POD groups were measured at 10 min before the initiation of anesthesia (baseline), at 10, 30, and 60 min after the initiation of anesthesia, and at the conclusion of surgery (Figure 1A-1H). For both groups, at 10 min after the initiation of anesthesia the MAP, HR, RPP, CVP, CO, SV, and CI values were significantly lower than at baseline (P<0.05). In the POD group, at 30 and 60 min after the initiation of anesthesia and at the conclusion of surgery, the MAP, HR, RPP, CVP, CO, SV, and CI values were significantly higher than the corresponding baseline measurements (P<0.05). In the non-POD group, however, at 30 and 60 min after the initiation of anesthesia and at the conclusion of surgery, the MAP, HR, RPP, CO, SV, and CI values were not significantly different from the baseline readings (P>0.05). At 30 and 60 min after the initiation of anesthesia and at the conclusion of surgery, the MAP, HR, RPP, CO, SV, and CI values were significantly higher in the POD group than in the non-POD group (P<0.05). The SpO2, CVP, and PETCO2 values were within normal range, and BIS values were maintained between 40 and 60 in both groups during surgery. There were no significant differences in the duration of surgery, recovery time, BIS, intraoperative dosages of fentanyl and propofol, or postoperative dosage of fentanyl between the 2 groups (P>0.05; Table 2). The incidence of intraoperative hypertension and tachycardia was significantly higher in the POD group than in the non-POD group (P<0.05; Table 3). We further used the multivariate logistic regression model to determine associations between POD and the incidence of intraoperative hypertension, hypotension, tachycardia, and bradycardia. Based on this analysis, the incidence of intraoperative hypertension and tachycardia was significantly associated with POD (P<0.05, Table 3). Discussion. POD commonly occurs in elderly patients after major surgery and is associated with increased mortality and morbidity, longer hospitalization, and greater medical cost [9,24,25]. Although the causes of POD have not been clearly elucidated, it is believed to be the consequence of many predisposing and precipitating factors [26]. Older age is a well-known risk factor for developing POD [14], but since aging cannot be countered, it is important to identify risk factors that can be controlled. Patients older than 70 years commonly have poor physiological reserve in vital organs, are more sensitive to drugs, and often have comorbidities such as hypertension, coronary heart disease, diabetes, and pulmonary disease, and thereby have a lower tolerance to surgery and anesthesia.
Stable intraoperative hemodynamics must be carefully maintained to preserve the balance between myocardial oxygen supply and demand during surgery in elderly patients. In this study, we investigated the effect of intraoperative changes in the hemodynamic parameters on the incidence of POD in 480 elderly patients, 137 of whom experienced POD. Multivariate logistic regression analysis showed that the incidence of intraoperative hypertension and tachycardia was significantly associated with the occurrence of POD, indicating that stable control of intraoperative hemodynamics may reduce the incidence of POD in elderly patients undergoing surgery. In addition, BP, HR, and myocardial oxygen consumption increase in response to the stress responses caused by surgery and anesthesia, which may worsen the heart diseases that often exist in elderly patients. In the present study, blood pressure was controlled during surgery to within 20% of baseline (measured 10 min before the initiation of anesthesia); if BP deviated by more than 30% from baseline, vasoactive drugs were given to maintain circulatory stability. Stable control of intraoperative hemodynamics can reduce cardiovascular and cerebrovascular accidents. The mechanisms that underlie POD associated with hemodynamic instability remain unclear. It has been reported that inflammatory cytokines such as interleukin-1β, tumor necrosis factor-α, and interleukin-6, released peripherally in response to surgical trauma, can cross the blood-brain barrier. This can result in the activation of astrocytes and microglia in the brain, leading to further release of neurotoxic inflammatory mediators that can subsequently induce delirium [27,28]. Activation of the cholinergic anti-inflammatory pathway can inhibit the peripheral release of inflammatory cytokines and suppress the neuroinflammatory response [29]. In a surgical stress rat model, acetylcholinesterase inhibitors that increase acetylcholine levels inhibited the peripheral protein expression of interleukin-1β and tumor necrosis factor-α and reduced neuroinflammation and degeneration in the cortex and hippocampus [30]. In the present study, the increased MAP, HR, RPP, CO, SV, and CI found in the POD patients may have reflected sympathetic activation and parasympathetic inhibition, which would result in a decreased release of acetylcholine. Decreased release of acetylcholine may weaken cholinergic anti-inflammatory activity and result in increased peripheral release of inflammatory cytokines, enhanced neuroinflammation, and subsequent delirium. In addition, Longas et al. [31] reported that general anesthesia increased blood levels of IL-6. Hadimioglu et al. [32] reported that TNF-α levels were significantly lower after general anesthesia and epidural anesthesia compared with baseline, suggesting that the depth of general anesthesia may protect against inflammatory factors that contribute to POD. Therefore, anesthesia that is too light may contribute to POD, and we could not exclude the possibility that light anesthesia contributed to POD in the elderly patients in the present study. In the present study, the SpO2, CVP, and PETCO2 values were within normal range, and the BIS value was maintained between 40 and 60 in both groups during surgery. We excluded patients with preoperative or intraoperative use of benzodiazepines or anticholinergic drugs, thus reducing the possibility that these precipitating factors contributed to the development of POD [16,26,33,34].
In the present study, we found no significant differences between the POD and non-POD groups with regard to gender, age, body weight, ASA scores, preoperative MMSE scores, preexisting diseases, operative time, recovery time from anesthesia, or intraoperative dosages of propofol. These findings suggest that these factors did not significantly contribute to the occurrence of POD. In addition, we found no significant differences in intraoperative dosages of fentanyl and postoperative dosage of fentanyl between the POD and non-POD groups, suggesting that fentanyl did not significantly contribute to the occurrence of POD. However, Tokita et al. [35] reported that fentanyl produced a good analgesic effect and reduced the incidence of POD. The difference may be due to the different dosage of fentanyl used between the 2 studies. Vaurio et al. [36] reported that postoperative pain was a risk factor for developing POD, and effective control of pain reduced the incidence of POD. In the present study, all patients achieved satisfactory analgesic outcomes, and there were no significant differences in the VAS and BCS scores at any timepoint between the POD and non-POD groups, suggesting that postoperative pain may not be a major contributor to the incidence of POD in elderly patients undergoing major surgery. Sieber et al. [13] reported that light sedation decreased the prevalence of postoperative delirium by 50% compared with deep sedation (BIS, approximately 50), indicating that deep sedation was a risk factor for developing POD. In the present study, the BIS values in the POD group and non-POD groups were 55.5±3.4 and 56.2±3.1, respectively. There was no significant difference in the BIS values between the 2 groups, suggesting that sedation (BIS >50) may not be a major contributor to the incidence of POD in elderly patients undergoing major surgery. The incidence of POD varies greatly among reports in the literature, ranging from 9% to 87%, and increases with the age of the patient [4][5][6][7][8]. In this study, the incidence of POD was 28.5%. Since neurologists were only called if patients were suspected of having POD, the incidence of POD may be higher among the patients examined. In addition, in the majority of patients with delirium, the condition was hypoactive, and thus there was a higher risk of it going undiagnosed. Therefore, some patients with hypoactive delirium may have been missed. Conclusions We investigated the effect of intraoperative hemodynamics on the incidence of POD in elderly patients undergoing major surgery, and found that the MAP, HR, RPP, CO, SV, and CI values were significantly higher in POD patients than in non-POD patients. Multivariate logistic regression analysis found that the incidence of intraoperative hypertension and tachycardia was significantly associated with POD in elderly patients. Our study supports that maintaining intraoperative stable hemodynamics may reduce the incidence of POD in elderly patients undergoing surgery.
2016-05-12T22:15:10.714Z
2016-04-03T00:00:00.000
{ "year": 2016, "sha1": "c1728d5a3b05892256c071955c18e64f05c1e491", "oa_license": "implied-oa", "oa_url": "https://europepmc.org/articles/pmc4822944?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "c1728d5a3b05892256c071955c18e64f05c1e491", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
257798112
pes2o/s2orc
v3-fos-license
Association of Urinary Biomarkers of Renal Tubular Injury with Cognitive Dysfunction in Older Patients with Chronic Kidney Disease: A Cross-Sectional Observational Study Epidemiological data suggest that individuals in all stages of chronic kidney disease (CKD) have higher risks of developing cognitive impairment. The relationship between CKD and cognition has been assessed exclusively using glomerular function markers; however, kidney tubule injury has not been assessed. We assessed the association between urinary biomarkers of renal tubular injury and cognitive dysfunction in older patients with CKD Stages 3–4. According to the Montreal Cognitive Assessment, participants were divided into cognitive dysfunction and control groups. Compared with the control group, the cognitive dysfunction group had significantly higher percentages of smokers, noticeably lower average education, and higher mitochondrial DNA (mtDNA) levels in the peripheral blood. Spearman correlation analysis showed that higher urine neutrophil gelatinase-associated lipocalin, kidney injury molecule-1, and beta-2 microglobulin (β2M) levels were significantly associated with lower cognitive scores. Multivariate logistic regression analysis showed that only increased urinary β2M levels were independently associated with cognitive worsening in CKD after adjusting for confounders. Logistic regression identified a promising role of urinary β2M combined with smoking and education for predicting cognitive impairment in CKD. Urinary β2M and cognitive function negatively correlated with mtDNA content, suggesting that mitochondrial dysfunction is a common pathophysiological mechanism linking CKD and cognitive dysfunction. Introduction Chronic kidney disease (CKD) is a complex and heterogeneous disease that continues to outpace clinical management and has been increasingly recognized as a significant health problem worldwide [1]. The global burden of CKD is rising, with estimated prevalence rates of 11-13% in the general population, gradually increasing to more than one-third of adults over the age of 65 years. The United Nations projects that the global population aged 65 years and older will triple from 0.5 billion in 2010 to 1.5 billion by 2050 [2]. In China, the proportion of older individuals (aged 65 years and older) was 17.9% in 2019 [3]. Global aging has far-reaching implications for the health care of older patients with CKD due to the rapidly increasing demand for medical services. Although studies support the notion that cognitive dysfunction begins in early kidney failure [4,5], cognitive impairment is still underdiagnosed, likely due to its subtle presentation and the lack of routine screening in older patients with CKD. There is growing evidence that cognitive disorders can result in decreased adherence to medications and treatment, poor nutrition, reduced quality of life, a loss of independence, heavier caregiver burden, and premature mortality [6]. Therefore, the early identification of cognitive impairment is essential to achieve better surveillance and diagnosis and to develop effective prevention and treatment strategies for patients with CKD. Prior studies evaluating the relationship between CKD and cognitive impairment have evaluated glomerular function and injury exclusively, largely utilizing the estimated glomerular filtration rate (eGFR) or albuminuria, markers of glomerular filtration, to assess kidney health [7]. However, the assessment of renal tubule injury is notably absent. 
Many urinary biomarkers of renal tubular injury have emerged, such as kidney injury molecule-1 (KIM-1), neutrophil gelatinase-associated lipocalin (NGAL), monocyte chemoattractant protein-1 (MCP-1), and beta-2 microglobulin (β2M), allowing for the clinical assessment of renal tubular health [8]. Compared to glomerular biomarkers, urinary biomarkers of renal tubular injury may enable early detection, the identification of the location of the injury, etiologic discernment, and prognostic prediction of CKD [9]. Our cross-sectional, observational study evaluated the relationship between common urine markers of renal tubular injury and cognitive impairment among older participants with CKD. These findings may provide some clues for improving disease detection and the identification of risk factors for cognitive impairment, early diagnosis of subclinical cognitive impairment, and prediction of adverse events in various clinical settings. Study Population This study was approved by the Chinese Clinical Trial Registry (ChiCTR) with a registration number of ChiCTR2200059887. For this observational study, we enrolled participants between April and September 2022 at the Wuxi People's Hospital, affiliated with Nanjing Medical University. CKD was defined according to the criteria of the United States National Kidney Foundation Kidney Disease Outcomes Quality Initiative. The staging was defined by the eGFR based on the presence of kidney damage and the level of kidney function. The eGFR was calculated using creatinine or cystatin C and the CKD-Epidemiology Collaboration equation, as described previously [10]. Stages 3-4 were categorized as an eGFR of 15-59 mL/min/1.73 m 2 . The inclusion criteria were (1) age ≥ 60 years; (2) Stage 3-4 CKD; (3) proteinuria < 1 g/day; and (4) no active infection or bleeding three months before enrollment. The exclusion criteria were (1) the inability to provide informed consent; (2) illiteracy; (3) hearing or visual disability, which could affect cognitive assessment; (4) a history of chemotherapy or radiation therapy for any cancer; and (5) a history of malignancy, stroke, dementia, or other psychiatric or neurological diseases. All participants provided informed consent for inclusion before participating in the study. The study was conducted in accordance with the Declaration of Helsinki, and the protocol was approved by the Ethics Committee of the Wuxi People's Hospital, affiliated with Nanjing Medical University (KY22017). Procedure All participants completed self-administered questionnaires in the presence of trained interviewers to collect clinical information, including age, sex, years of education, medical history (hypertension, diabetes mellitus, and coronary heart disease history), lifestyle factors (smoking and alcohol use), and mental health conditions. Blood and urine samples were collected from participants between 6:00 a.m. and 8:00 a.m. following overnight fasting and analyzed at a centralized laboratory. Blood pressure, body weight, and height were also measured. Each subject's body mass index (BMI) was calculated by dividing body weight by height squared (kg/m 2 ). The participants completed a battery of cognitive function tests between 2:00 p.m. and 4:00 p.m. using the Montreal Cognitive Assessment (MoCA). According to the cognitive assessment results, the participants were divided into a normal cognitive group and a cognitive impairment group, with 50 patients in each group. 
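A minimal sketch of how participants could be dichotomized from MoCA results as described in the next section (one point added for ≤12 years of formal education, a maximum score of 30, and a score below 26 indicating cognitive impairment); the example records and the capping of the adjusted score at 30 are assumptions.

```python
# Education-adjusted MoCA score and dichotomization at the 26-point cut-off.
def adjusted_moca(raw_score: int, years_of_education: int) -> int:
    adjusted = raw_score + 1 if years_of_education <= 12 else raw_score
    return min(adjusted, 30)          # assume the total score is capped at the 30-point maximum

def has_cognitive_impairment(raw_score: int, years_of_education: int) -> bool:
    return adjusted_moca(raw_score, years_of_education) < 26

participants = [("P01", 24, 9), ("P02", 27, 16), ("P03", 25, 14)]   # (id, raw MoCA, education years)
for pid, score, edu in participants:
    print(pid, adjusted_moca(score, edu), has_cognitive_impairment(score, edu))
```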
Blood Sample Collection. Blood was collected in serum tubes (BD Vacutainer, Franklin Lakes, NJ, USA) and ethylenediaminetetraacetic acid plasma tubes (BD Vacutainer, Franklin Lakes, NJ, USA). The samples were centrifuged for 10 min at 3000× g, and the supernatant was collected, aliquoted into sterile cryovials, and stored at −80 °C until assayed. Serum low-density lipoprotein (LDL) levels were measured in a single central laboratory using standard methods. Assessment of Cognitive Function. The MoCA was used to evaluate cognitive function [11]. The MoCA is a valid and standardized screening tool that takes approximately 10 min to complete and is highly sensitive in detecting mild cognitive impairment. The MoCA grades multiple domains of cognition by measuring short-term memory, visuospatial abilities, executive function, language (expression and comprehension), attention, concentration, working memory, and orientation to time and place. The score was adjusted for education by adding one point to the total score for participants with ≤12 years of formal education. The maximum possible score on the MoCA is 30, with higher scores indicating better cognitive function. Cognitive function was dichotomized at 26 points on the MoCA: participants with a score of <26, or those unable to complete the MoCA, were classified as having cognitive impairment, and participants with a score of ≥26 were classified as not having cognitive impairment [12]. The test score was also used as a continuous variable in the analyses. The MoCA was administered and scored by a research coordinator following the Mandarin version 7.1 instructions available at www.mocatest.org. Mitochondrial Deoxyribonucleic Acid (mtDNA) Analysis. Total DNA was extracted from the serum using the TIANamp Genomic DNA Kit (TIANGEN, Beijing, China). Quantitative real-time polymerase chain reaction (PCR) was used to determine the mtDNA copy number according to the method developed by Wong et al. [13]. mtDNA was amplified using primers specific to the mitochondrial cytochrome B gene to generate a standard curve. The mtDNA copy number was normalized to the nuclear DNA copy number by amplifying the acidic ribosomal phosphoprotein P0 (36B4) nuclear gene. Primer sequences were designed using Primer Premier 5.0 (PitchBook, San Francisco, CA, USA). The cytochrome B primers were forward 5′-GCCTGCCTGATCCTCCAAAT-3′ and reverse 5′-AAGGTAGCGGATGATTCAGCC-3′. The primers for 36B4 were forward 5′-AGGATATGGGATTCGGTCTCTTC-3′ and reverse 5′-TCATCCTGCTTAAGTGAACAAACT-3′. PCR was performed using a Rotor-Gene real-time centrifugal DNA amplification system (Corbett Research, Sydney, Australia) with SYBR Green master mix (Applied Biosystems, Foster City, CA, USA). The thermal cycling conditions were one cycle of 50 °C for 2 min and 95 °C for 15 min, followed by 35 cycles of 94 °C for 20 s, 56 °C for 30 s, and 72 °C for 30 s. The melting reaction was measured with a decrease of 1 °C per cycle between 72 and 92 °C. The mtDNA copy number was calculated using the formula described by Ryang et al. [14], and the cycle threshold values in each quantitative PCR were used to measure the mtDNA copy number using standard regression analyses. Statistical Analysis. The biomarkers were log2-transformed to reduce skew. Continuous data are presented as either mean ± standard deviation (SD) or median with interquartile range, depending on normality, which was assessed using the Shapiro-Wilk test.
For two independent groups, the means of normally distributed data were compared using t-tests, and the medians of non-normally distributed data were compared using the Wilcoxon test. Categorical variables are expressed as numbers (percentages). The chi-square test was used to test the relationship between two categorical variables. Correlation and multivariate logistic regression analyses adjusted for potential confounding variables were performed to assess the association between biomarkers of renal tubular injury and cognitive impairment. Receiver operating characteristic (ROC) curve analysis was performed to investigate the diagnostic power and optimal cut-off points of biomarkers of renal tubular injury using the area under the curve (AUC) and the Youden index, respectively. Significance was defined as p < 0.05 for all analyses, and all reported p-values are two-sided. Statistical analyses were performed using R, version 3.6.2 (R Core Team, Vienna, Austria). Characteristics of the Study Population. In total, 50 patients (33 male and 17 female) with cognitive impairment (mean age 76.0 ± 7.2 years) and 50 individuals (38 male, 12 female) with normal cognition (mean age 74.0 ± 7.9 years) participated in this study. No differences in age, sex, BMI, drinking history, number of hypertension medications, rates of diabetes, coronary heart disease, LDL values, or eGFR values were observed between the two groups. Table 1 shows the demographic and medical characteristics of the study participants. Years of education were lower among patients with cognitive impairment than among the control participants (p < 0.001, Table 1), and there was a greater percentage of smokers among patients with cognitive impairment than among control participants (p < 0.001, Table 1). The ranges (minimum-maximum values) of the basic variables of the study population are given in the Supplementary Material (Table S1). Comparison of Urinary Biomarkers of Renal Tubular Injury in the Two Groups of Study Participants. Significantly increased urinary levels of log2KIM-1 (p = 0.047), log2NGAL (p = 0.004), and log2β2M (p = 0.001) were observed in the cognitive impairment group compared with the control group, while levels of log2MCP-1 (p = 0.48) did not significantly differ between the groups (Figure 1). Based on Spearman correlation analysis (Table S2), higher urinary NGAL, KIM-1, and β2M levels were significantly associated with lower cognitive scores. We further examined this association using multivariate logistic regression analysis. However, only higher β2M levels (odds ratio = 1.5, 95% confidence interval = 1.10-2.11, p = 0.014) remained significantly associated with worse cognitive performance after adjusting for education and smoking, while NGAL and KIM-1 failed to show significance in the multivariable analysis (Figure 3).
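As an illustration of the Spearman correlations referred to above, the following minimal sketch relates log2-transformed urinary biomarkers to MoCA scores using scipy.stats.spearmanr; the arrays are synthetic stand-ins for the study data.

```python
# Spearman rank correlation of log2 urinary biomarker levels against MoCA scores.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n = 100
moca = rng.integers(18, 30, n).astype(float)
log2_b2m = 8.0 - 0.15 * moca + rng.normal(0, 0.6, n)    # higher marker, lower score (synthetic)
log2_ngal = 5.0 - 0.10 * moca + rng.normal(0, 0.8, n)

for name, marker in [("beta-2M", log2_b2m), ("NGAL", log2_ngal)]:
    rho, p = spearmanr(marker, moca)
    print(f"{name}: rho = {rho:.2f}, p = {p:.3g}")
```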
ROC Curve Analysis of Single Urinary Biomarkers of Renal Tubular Injury and the Multivariable Model. The ROC curve analysis revealed that urinary β2M levels were better predictors of cognitive impairment in patients with CKD than urine NGAL and KIM-1 levels (AUC = 0.69 for β2M, 0.67 for NGAL, 0.54 for NGAL, and 0.62 for KIM-1, respectively). Based on the ROC curve analysis, the best cut-off value for predicting cognitive impairment was 6.62 for log2 urinary β2M, yielding a sensitivity and specificity of 74% and 60%, respectively (Figure 4). A logistic regression model combining urinary β2M, education, and smoking was the optimal model for predicting cognitive impairment: an AUC of 0.83 was observed for this model, with a sensitivity and specificity of 60% and 90%, respectively (Figure 5). The AUCs of the two ROC curves were compared using the DeLong test. The AUC of the logistic regression model was significantly higher than that of urinary β2M alone (p = 0.006), indicating that the diagnostic power can be improved by adding easy-to-obtain clinical characteristics (education and smoking).
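A minimal sketch of the ROC analyses described above: the Youden index selects the optimal cut-off for a single marker, and a logistic model combining the marker with education and smoking is compared by AUC. The data are synthetic assumptions, and the DeLong comparison used in the study is not included because it is not part of scikit-learn.

```python
# Single-marker ROC with Youden-optimal cut-off, plus a combined logistic model AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
n = 100
impaired = rng.integers(0, 2, n)
log2_b2m = 6.0 + 0.8 * impaired + rng.normal(0, 0.7, n)    # higher beta-2M if impaired (synthetic)
education = 12 - 2 * impaired + rng.normal(0, 2, n)         # fewer years if impaired (synthetic)
smoking = rng.binomial(1, 0.2 + 0.3 * impaired)

# Single-marker ROC and Youden-optimal cut-off.
fpr, tpr, thr = roc_curve(impaired, log2_b2m)
best = np.argmax(tpr - fpr)
print("beta-2M AUC:", round(roc_auc_score(impaired, log2_b2m), 2),
      "optimal cut-off:", round(thr[best], 2))

# Combined model: beta-2M + education + smoking.
X = np.column_stack([log2_b2m, education, smoking])
combined_score = LogisticRegression(max_iter=1000).fit(X, impaired).predict_proba(X)[:, 1]
print("combined-model AUC:", round(roc_auc_score(impaired, combined_score), 2))
```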
Association of mtDNA Levels in Peripheral Blood with Tubular Injury Markers and Cognitive Impairment mtDNA content, measured as the amount of mitochondrial DNA copy number, was evaluated to assess mitochondria status. The amount of mtDNA was lower in the cognitive impairment group than in the control group (p = 0.006, Figure 6A). To provide more insight into the relationship between renal tubule dysfunction and mitochondrial impairment, a correlation analysis between urinary β2M and mtDNA content was performed. It was suggested that urinary β2M levels negatively correlated with mtDNA content in a statistically significant manner (R = −0.21, p = 0.036, Figure 6B). We Association of mtDNA Levels in Peripheral Blood with Tubular Injury Markers and Cognitive Impairment mtDNA content, measured as the amount of mitochondrial DNA copy number, was evaluated to assess mitochondria status. The amount of mtDNA was lower in the cognitive impairment group than in the control group (p = 0.006, Figure 6A). To provide more insight into the relationship between renal tubule dysfunction and mitochondrial impairment, a correlation analysis between urinary β2M and mtDNA content was performed. It was suggested that urinary β2M levels negatively correlated with mtDNA content in a statistically significant manner (R = −0.21, p = 0.036, Figure 6B). We also attempted to examine the correlation between mtDNA content and cognitive impairment. The loss of mtDNA content was positively correlated with cognitive decline, measured as the MoCA score (R = 0.34, p < 0.001, Figure 6C). also attempted to examine the correlation between mtDNA content and cognitive impairment. The loss of mtDNA content was positively correlated with cognitive decline, measured as the MoCA score (R = 0.34, p < 0.001, Figure 6C). Discussion The prevalence of CKD has increased dramatically in older adults, from 11.13% in the general population to nearly 40% in individuals aged ≥ 60 years, and this may increase even further in the future [15]. Among older adults with CKD, approximately 17.5% have Stage 1-2 CKD, 75% have Stage 3 CKD, and 12.5% have Stage 4-5 CKD [16]. CKD is predicted to be the fifth−leading cause of death worldwide by 2040 [17]. Moreover, the prevalence and degree of cognitive decline increase with advanced CKD stages. When CKD progresses from Stage 3 to Stage 5, the prevalence of cognitive impairment among patients increases from 20-50% to 70% [18]. Studies examining the relationship between CKD and cognitive function have focused largely on hemodialysis patients, and little is known about cognitive dysfunction in pre−dialysis Stage 3-4 CKD. Because these conditions are often detected too late in the CKD course, no effective treatments have been developed to minimize cognitive impairment, alter the course of CKD, or limit the associated morbidity and mortality. Epidemiological data suggest that CKD is strongly associated with cognitive impairment, and this association worsens with deteriorating renal function [19,20]. However, some studies have shown no association between CKD and cognitive function. These studies have mostly focused on glomerular function and injury, leading to mixed results. It is well known that CKD is not limited to the glomerulus. Instead, based on kidney biopsy, tubular atrophy and tubulointerstitial fibrosis are common findings in virtually all forms of CKD, and their severities have consistently proven to be reliable features for predicting the progression to end−stage kidney disease [21]. 
Discussion

The prevalence of CKD has increased dramatically in older adults, from 11.13% in the general population to nearly 40% in individuals aged ≥60 years, and this may increase even further in the future [15]. Among older adults with CKD, approximately 17.5% have Stage 1-2 CKD, 75% have Stage 3 CKD, and 12.5% have Stage 4-5 CKD [16]. CKD is predicted to be the fifth-leading cause of death worldwide by 2040 [17]. Moreover, the prevalence and degree of cognitive decline increase with advanced CKD stages: when CKD progresses from Stage 3 to Stage 5, the prevalence of cognitive impairment among patients increases from 20-50% to 70% [18]. Studies examining the relationship between CKD and cognitive function have focused largely on hemodialysis patients, and little is known about cognitive dysfunction in pre-dialysis Stage 3-4 CKD. Because these conditions are often detected too late in the CKD course, no effective treatments have been developed to minimize cognitive impairment, alter the course of CKD, or limit the associated morbidity and mortality. Epidemiological data suggest that CKD is strongly associated with cognitive impairment, and this association worsens with deteriorating renal function [19,20]. However, some studies have shown no association between CKD and cognitive function. These studies have mostly focused on glomerular function and injury, leading to mixed results. It is well known that CKD is not limited to the glomerulus. Instead, based on kidney biopsy, tubular atrophy and tubulointerstitial fibrosis are common findings in virtually all forms of CKD, and their severities have consistently proven to be reliable features for predicting progression to end-stage kidney disease [21].

Tubular epithelial cells are the main constituent cells of the kidney and are highly sensitive to ischemia, hypoxia, poisoning, and other injury factors. Serum creatinine levels rise in the course of acute kidney injury (AKI) only after 24 h, limiting the ability for early detection and intervention in these cases. Biomarkers of renal tubular injury, including NGAL and KIM-1, measured within 4-6 h following AKI, have been demonstrated to predict the risk of AKI well before a rise in serum creatinine [8]. A study of over 1200 biopsies from donor candidates with healthy kidneys showed that tubulointerstitial fibrosis was present in 28% of patients, ranging from 3% in the 20-29 age group to 73% in those aged 70-79 years [22]. Previous studies might therefore have underestimated the extent of the relationship between kidney disease and cognitive impairment. To address this evidence gap, we conducted a cross-sectional analysis of an observational study to determine the relationship between urinary biomarkers of renal tubular injury and cognitive function in patients with Stage 3-4 CKD at the Wuxi People's Hospital, affiliated with Nanjing Medical University. Important exclusion criteria included proteinuria of >1 g/day, to reduce the influence of damage to the glomerular filtration membrane. In the univariate analysis, NGAL, KIM-1, and β2M levels were negatively correlated with cognitive function. However, only higher β2M levels remained significantly associated with poorer cognitive function after adjustment for confounders in the multivariate logistic regression analysis.
β2M is an endogenous low-molecular-weight protein that easily passes through the glomerular filtration membrane and is almost entirely reabsorbed and degraded by proximal tubular cells, with little excretion in the urine [23]. Bianchi et al. reported that β2M is a better endogenous marker of GFR than serum creatinine [24]. β2M is a part of the histocompatibility leukocyte antigen complex on the cell membranes of all nucleated cells that synthesize it. It is released into circulation at a constant rate in normal subjects during normal cell turnover. β2M is freely filtered in the glomerulus before being readily reabsorbed in the proximal tubules, so urinary excretion is low in healthy individuals [23]. Thus, urinary β2M levels can be significantly elevated in cases of reabsorption dysfunction of the proximal renal tubules. Our study demonstrated that urinary β2-MG levels in older patients with CKD and cognitive dysfunction were significantly increased and were negatively correlated with the MoCA score. The observed associations were independent of smoking and years of education. The AUC of the ROC curve was examined for each biomarker of renal tubular injury to assess its ability to predict cognitive impairment in patients with CKD. The present study demonstrated that urinary β2-MG levels have a better diagnostic value than urine NGAL and KIM-1 levels. Logistic regression modeling identified the combination of urinary β2-MG, smoking, and education as optimal predictors for cognitive impairment. Education and smoking were added to the model because these demographic characteristics differed significantly between the two groups of study participants; including them in the model may correct for this effect. The AUC of the logistic regression model was higher than that of urinary β2-MG alone, indicating that the diagnostic power can be improved by adding easy-to-obtain clinical characteristics (education and smoking). We believe that the early identification of cognitive impairment is important to optimize the compliance of patients with CKD, improve quality of life, and minimize premature death. This can be accomplished clinically by changing the care approach in practical ways to slow the progression of impaired cognition. Such strategies may include the optimal use of antiplatelet therapy and statins, meticulous blood pressure control, improved diet, exercise, cognitive stimulation, and retraining. The mechanisms underlying the association between renal tubular health and cognitive impairment are uncertain and require further studies. The cognitive complications of CKD may be linked to an aberrant "kidney-brain axis". Recent evidence suggests that there is a crosstalk between the kidney and brain and that this "kidney-brain axis" is sensitive to mitochondrial dysfunction, chronic inflammatory stress, and other mechanisms that promote vascular aging, which may lead to end-organ damage that is manifested clinically by the high prevalence of cognitive impairment observed during the progression of CKD [25]. Reabsorption by the proximal tubules is highly dependent on mitochondrial activity [26]. In the central nervous system, sufficient energy supply, which is required for neuronal survival and excitability, is mostly dependent on mitochondrial sources. Therefore, the brain is much more vulnerable to mitochondrial dysfunction [27]. Shlipak et al.
proposed that mitochondrial dysfunction may be particularly important in defining the mechanisms linking kidney disease and cognitive dysfunction [28]. Mitochondrial respiratory chain dysfunction has been reported to be associated with mtDNA abnormalities. Malik and Czajka proposed that the mtDNA content can be a marker of mitochondrial dysfunction [29]. The premise of this theory is that the origins of mitochondrial impairment can be mutations in genes of nuclear DNA encoding mitochondrial proteins or in mtDNA. In contrast to nuclear DNA, mtDNA is more susceptible to damage, such as that caused by oxidative stress. In the initial presence of oxidative stress, reactive oxygen species contribute to mitochondrial biogenesis, resulting in an increased ratio between mtDNA and nuclear DNA. However, persistent oxidative stress may lead to the depletion of mtDNA, resulting from damaged mtDNA and proteins. The accumulation of damaged mtDNA may directly contribute to mitochondrial dysfunction, which plays a significant role in aging and increases the cognitive decline that occurs with aging and aging-related neurodegeneration [30]. Therefore, according to this hypothesis, changes in mtDNA content may precede mitochondrial dysfunction as an adaptive response and can therefore serve as a predictive marker. In our study, we examined the correlation between mtDNA content and cognitive decline, measured as the MoCA score. We found that the loss of mtDNA content was positively correlated with cognitive decline. Consistent evidence demonstrates that damaged mtDNA is linked to several mitochondrial disorders that have neurologic or cognitive sequelae. In a case-series report, Molnar et al. identified a general pattern of moderate-to-severe cognitive dysfunction across 19 patients with primary mtDNA mutations [31]. Lee et al. found that the mtDNA copy number in peripheral blood is associated with cognitive function in apparently healthy elderly women, which suggests that reduced mtDNA content may be a possible early marker of dementia [32]. It was observed that participants in the high mtDNA copy number group were more likely to have cognitive dysfunction than participants in the low mtDNA copy number group, which reinforced the finding that the mtDNA copy number may be useful for monitoring cognitive decline in older adults [33]. Prado found that circulating mtDNA levels may serve as a potential biomarker to determine the cognitive status of patients with schizophrenia. In this case, they assumed that the importance of mtDNA resides in its predisposition to the accumulation of mutations, capacity to trigger a pro-inflammatory state, and apoptosis, leading to cognitive impairment [34]. In addition, we found that urinary β2M levels negatively correlated with mtDNA content and that cognitive function, measured as the MoCA score, positively correlated with mtDNA content. These results suggest that reabsorption dysfunction in the proximal renal tubules and cognitive impairment may share the common pathology of mitochondrial dysfunction. However, the conclusions are hypothesis-generating, and further studies using a larger sample size are required to validate these results. This study has a few limitations. First, this study only included older patients with Stage 3-4 CKD; therefore, the association between urinary β2M and cognitive impairment cannot be generalized to all patients with CKD, and a multi-center, large-sample cohort study is required. Second, the present analysis reports a cross-sectional association between urinary β2M levels and cognitive impairment.
As such, a causal association cannot be established, and prospective studies should further confirm the conclusions.

Conclusions

Urinary β2-MG, in combination with education and smoking, has the potential to identify cognitive dysfunction in older patients with CKD. Mitochondrial dysfunction may be a common pathophysiological mechanism that links CKD and cognitive dysfunction. Further studies are needed to elucidate this phenomenon and its underlying mechanisms in the context of its potential therapeutic value.
2023-03-29T15:33:42.609Z
2023-03-25T00:00:00.000
{ "year": 2023, "sha1": "cf8f6a090a473db3a6a5dfa1ba89280bf19e2398", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-3425/13/4/551/pdf?version=1679899719", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d2a14b23dcc9df2ea27baf42c5280841853aad37", "s2fieldsofstudy": [ "Medicine", "Biology", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
168170397
pes2o/s2orc
v3-fos-license
Nature, Calcigender, Nurture: Sex-dependent differential Ca2+ homeostasis as the undervalued third pillar ABSTRACT After many years of sometimes heated discussions, the problem regarding the relative importance of two classical dogmas of the Nature (genes and sex-steroid hormones) versus Nurture (education, teaching-learning etc.) debate, is still awaiting a conclusive solution. Males and females differ in only a few (primordial) genes as is well documented by genomic analyses. However, their sex- and gender-specific behavior and physiology is nevertheless profoundly different, even if they grew up in a similar (educational) environment. By extending the “Calcigender-concept”, originally formulated in 2015, to the simplistic binary Nature versus Nurture concept, a novel framework showing that the sex-steroid hormone-dependent intracellular Calcium concentration is an important third factor may emerge. Although the principles of animal physiology and evolution strongly stress the fact that Nature is always dominant, Nurture can, to a limited extent, play a mitigating role. Introduction The long-running "Nature versus Nurture" debate is about whether human behavior is mainly determined by the person's genes (DNA) or by his/her social/ educational environment, either prenatal or/and during the person's life. To date, the not yet generally accepted consensus is that it is not "or" but that both play a role. The key issue is the relative importance of Nature and Nurture: more or less equal or a dominance of one of them, in particular of "Nature"? Despite all progress in the genetics of sex determination, in (neuro)physiology and in sociological gender studies, some questions remain unanswered. Why is male-female physiology and even more their behavior so different while males and females only differ in relatively few genes? The explanation is that not so much different sex-specific genes, but rather differential gene expressions may cause the differences. A second problem is why there are only two genetic sexes, but several gender forms. [1] Another problem is our rudimentary knowledge about the functioning of our cognitive memory system at the molecular level, particularly about the role of self-generated electricity, electrical signaling, and the plasma-membrane-cytoskeletal complex in this process [2]. It is well documented that sex-steroids play a major role in reproduction-related behavior. Yet, it is difficult to explain why there are only two types of gonads, two families of sex-steroids that are mainly secreted by the gonads, namely androgens and estrogens, that structurally these steroids are not drastically different, but that nevertheless there are more than two gender forms The mainly social implications of the cited questions come together in the questions: To what extent is Homo developed through Nurture? If it is feasible at all, is such development desirable, and to what extent? The intrinsic problem here is that the mode of action of sex-steroids is well documented at the level of action via nuclear receptors-transcription factors (= genomic effects), but much less at the level of the functional role of the cell membranes. These membranes harbor various enzymes that are involved in generating non-genomic effects, e.g. in lipid and steroid biosynthesis, osmoregulation, and even more important in Ca 2+ -homeostasis. It is possible that in the past too much emphasis may have been given to the intranuclear/genetic mode of action of sex-steroids, neglecting nongenomic effects. 
There are major differences in Ca 2+ -metabolism and homeostasis between males and females. Furthermore, Ca 2+ plays a key role in controlling muscle contraction, which is instrumental to behavior and gamete production. Why was the role of differential Ca 2+ -homeostasis on gender-linked behavior not considered? [1]

Today's sexual reproductive physiology of animals has ancient roots in evolutionary history

Researchers in the biomedical sciences and in the humanities often have a different approach towards the Nature-Nurture debate. In particular, the retrograde timescale differs. In the humanities, gender equality and the means to realize it ever better are a recent key issue. Sex-related genetic differences and differential Ca 2+ -homeostasis are not manipulable and are thus not at the center of their research interest. On the other hand, animal physiologists have a much longer retrograde perspective, but their approach barely touches human sociology. They ask questions about how sexual reproduction started at a time in which reproduction was asexual, thus without egg- and sperm cells (gametes), using the principles and mechanisms of regeneration. How the mode of reproduction changed from asexual towards sexual in placental mammals is a fascinating detective story with unexpected plot turns [3]. Some key biochemical signaling pathways have been preserved for many millions of years in all animal species. Others came into being more recently. In the context of signaling, the importance of homeostasis during evolution is a central issue [4]. Whether males and females can or/and should be forced to become more gender-alike in a heterosexual relation mode is only an issue in the species Homo sapiens. No other animal species is known to encourage such convergence. In all non-human animal species, the rule is: a male is a male, and a female is a female: the genetics of sex dominate. Apparently, in the animal Kingdom as a whole, fitness of the species seems to be better served by the differences in gender than by the similarities. But Homo sapiens has a superior cognitive memory system that enabled him/her to realize technical improvements in living conditions and in fitness so that not every member of the group/population had to be engaged in food acquisition, care, and protection. New jobs came into being, some of which could, in theory, be done by both males and females, irrespective of their value for reproductive fitness of the population. Gender-competition came into being, in particular for jobs in which muscular strength matters less than cognitive capabilities. This triggered discussions about the relative importance of the genetic memory system (DNA → RNA → Proteins) versus the cognitive memory system. Herein self-generated electrical pulses carried by inorganic ions play a crucial role [2,5], but despite all progress, this memory type continues to be largely a black box. A challenging question is whether both memory systems act independently of each other, or whether they can influence and even change each other, so that the final outcome of this mutual influence is that gender-inequality (in humans) can be manipulated into (more) gender-neutrality. More precisely, for biologists the observation is that, during hundreds of millions of years, the very well-conserved signaling pathways causal to sexual reproduction did not show a drive towards realizing male-female behavioral equality; on the contrary.
Is it then realistic to think that this classical malefemale binary system can be remodeled in only a few human generations, without interfering in the biochemical signaling pathways, thus only by changes in Nurture? Do gender and sexual reproduction have an (evolutionary) goal? The answer by many people to this question is: Of course, because the ultimate goal is to produce a progeny. Yet, this at first sight self-evident and logical reply is in conflict with a basic rule in evolutionary theory that says that there is no goal whatsoever in evolution, although some recent experimental data suggest that in some circumstances, it may be possible [6]. Long ago, the formation of egg-and sperm cells did not result from planning, but from unplanned mutations. Rather, it was the accidental result of the coming into existence of "aberrant stem cells of the germ cell line" against which the somatic cells of the body developed an (immunological) rejection strategy [3]. Because it failed to kill the growing cells of the germ cell line early in their development, they kept growing (the oocytes in particular) or/and multiplying (in particular the sperm cells). At the end ejection of the gametes from their production sites (ovary and testis) and even of a baby as in humans and other placental mammals, was the only option left for the producing individuals to survive. This contrasts with our belief that producing gametes and a progeny is very good because it increases fitness and assures the continuation of the population. However, from the physiological point of view, being a male or a female indicates a (disease) state controlled by toxic Ca 2+ -levels. [1,3,7] One should also keep in mind that probably both females and males of most animal species do not know that having heterosexual sex is causal to the production of a progeny. For them, a progeny is an unexpected free bonus when having engaged in hormone-driven copulation behavior. Having sex is a stronger drive than producing a progeny. This is an important issue in the discussion about "Sex versus Gender". [1] Reminder of a few well-established key genetic and physiological principles Genetics of sex determination. the human Y chromosome Diploid cells of the species Homo sapiens have 46 chromosomes, of which 44 are autosomes that occur in both males and females, and two are sex chromosomes (XX in females and XY in males) ( Figure 1). For the figure of human male karyogram see Wikipedia: Y chromosome. [8] The form of the sex chromosomes by themselves is not important. In birds e.g., the configuration is ZZ (males) and ZW (females). In the fruit fly Drosophila melanogaster, males have one chromosome less than females, and this is indicated as XO for males and XX for females. For more details and for mechanisms of sex-determination in other species, in particular, non-mammalian species, see textbooks of Developmental Biology. Their variability is high. Yet, the reproductive physiology of non-mammalian species is in many aspects similar to that of placental mammals. As will be outlined later, despite some differences in mechanisms, the common physiological outcome is "differential sex-dependent Ca 2+ homeostasis" which enables females to secrete more Ca 2+ than males, e.g. in yolk-rich eggs and in milk of mammals. 
Such ability implies that the cells involved in the secretion of high amounts of Ca 2+ through the production and secretion of Ca 2+ -transporting proteins use such a mechanism to cope with the problem of the toxicity of high intracellular Ca 2+ concentrations [7]. In most mammals, the Y chromosome is very important in sex determination. Its presence is dominant over that of the X chromosome. The human X chromosome carries about 900-1600 genes. The much smaller human Y chromosome spans approximately 58 million base pairs, which corresponds to about 1% of the total DNA in a male cell (Wikipedia: Y chromosome); it carries over 200 genes, at least 72 of which code for proteins. The question is: out of these 200+ genes, which primordial inducer(s) trigger(s) the sex-determining cascade? Here only the system in the species Homo sapiens is discussed, in which the well-documented Testis-Determining Factor (TDF), also known as the SRY gene/protein, is of utmost importance (see later).

Male humans and most but not all mammals have a male-determining gene, SRY, located on the Y chromosome

There are so many differences in the morphology, physiology, and behavior of male and female animals in general that one is tempted to think that males and females must differ in many genes. If only mammals are considered, this idea is wrong. In the majority of mammals, namely the placental mammals and the marsupials, the decisive difference is that, compared to females, males have one extra gene, named SRY (Wikipedia: Testis determining factor) [10]. This intronless gene is located on the short arm of Y (Figure 2a) (details in: Ensembl, May 2017) [11]. It codes for the SRY protein (Figure 2b), which is also named Testis Determining Factor (TDF) [10]. In addition, many other genes which are present in both sexes are differentially expressed during development, in particular under the influence of sex-steroid hormones. The most "primitive" group of mammals, the Monotremes, which lay eggs like reptiles and birds do but produce milk and suckle their young like the other mammals, have no TDF. It is thought that after the split between the Monotremes and the therians (= marsupials and placental mammals), the SRY gene may have arisen from a gene duplication of the X chromosome-bound gene SOX3, a member of the Sox family. If the SRY gene is active, the fertilized egg (zygote) will develop into a male. At first sight, the presence of an extra gene (SRY) in most male mammals may seem to be a good argument for stating that males are genetically superior compared to females. But females have 2 X chromosomes while males have only one X, thus in this aspect females are genetically superior. However, when one takes into account that in females one of the two X chromosomes gets inactivated into a Barr body (see later) early in development, the assumed female superiority vanishes. What really matters is the physiological outcome of the genetic basis of sex determination. Here, the fact that females live longer than males in most species leads to the conclusion that in some aspects females are physiologically superior compared to males.

Mode of action of the testis-determining factor

The SRY/TDF protein acts as the primordial inducer that triggers many other genes into a male-generating activity. [14][15][16] It is active in the nucleus as a transcription factor. To be functional in sex determination, the SRY/TDF protein needs complexation with other transcription factors, in particular SOX9.
The resulting protein complex activates still another factor. This initiates a succession of developing structures such as the primary sex cords, the seminiferous tubules, and part of the undifferentiated gonad, turning it into a testis. Further inductions result in the formation of the Leydig cells, which will start secreting testosterone. The Sertoli cells will produce anti-Müllerian hormone. The combination of these hormones inhibits the female anatomical structural growth in males. It also promotes male dominant development. [10] XX chromosomal configuration in women. the barr body As already mentioned in brief before, a diploid chromosome configuration is generally assumed to be better than a haploid one in the event one of the homologous genes does not work properly. Hence, if there are no restrictions, the mammalian female XX configuration could be considered as genetically superior. However, this assumption fails to take into account the peculiar and exceptional mechanism of random inactivation of one of the X chromosomes in all somatic cells, a process that takes place early in development. The inactivated X chromosome remains (temporarily) active for 10-15%, which corresponds to about the number of active genes present on the Y chromosome. Later, the inactive X chromosome remains microscopically visible as a "Barr body" [17] which is attached to the inner side of the nuclear envelope. A Barr body (named after its discoverer Murray Barr) is not only found exclusively in Homo sapiens. It is also visible in the nucleus of those species in which sex is determined by the presence of the Y or the W chromosome rather than by a diploid X chromosome. The process of inactivation of one of the X chromosomes is known as "Lyonization". [18] Thus, functionally, females are for most of their life haploid for their sex chromosomes. If one takes this situation into account, males who have also one X but in addition a Y chromosome, have the genetically superior sex form. This does not a priori result in a "better physiological condition" in males. It is well documented and known in many animal species that females live longer than males. In summary: males of most mammalian species have a testis-determining factor that is absent in females. Although females are diploid for the X chromosome (XX), they only have one X that is active because the second X is randomly inactivated in all somatic cells of the body. Such females are in fact a mixture of two genetically different individuals (nearly identical twins). None of these genetic differences is under the control of Nurture. Sex-steroid hormones Testosterone as an anabolic steroid and estrogens-estradiol as lipogenic steroids Testosterone is the primary male sex hormone, and an anabolic steroid (Figure 3). In most vertebrates, humans inclusive, testosterone is secreted primarily but not exclusively by the Leydig cells in the testes, and, to a lesser extent, in the ovaries of females. In human males, it plays a key role in the development of the testes and prostate. It promotes the appearance of secondary sexual characteristics such as, compared to females, an increased muscle and bone mass, as well as the growth of body hair. Males being more muscular than females is advantageous in situations in which fighting enemies, and hunting increases the protection and fitness of the family and social group to which they belong. It is a major trigger in generating male-specific behavior. 
It also plays, among still other functions, a role in preventing osteoporosis, indicating that it exerts some of its effects through the Ca 2+ -homeostasis system. Some derivatives of testosterone also have androgenic activity, e.g. dihydrotestosterone is even more potent than testosterone itself in causing similar effects. The steroid Estradiol (E2), also spelled oestradiol, is the major female sex hormone. Estradiol is produced especially within the follicles of the ovaries, but also in other tissues including the testicles, the adrenal glands, fat, liver, the breasts, and the brain. It is responsible for the development and maintenance of female reproductive tissues such as the uterus, mammary glands, and vagina during puberty, adulthood, and pregnancy. It is not as potent as androgens as an anabolic steroid; hence, females are often less muscular than males. In contrast, in the perspective that females have to be prepared for periods of food scarcity in particular for raising their young/children, the role of estrogens in promoting adipose tissue development is beneficial. It increases not only their own fitness, but also the survival of their young, e.g. through lipid secretion along with milk, a process in which the estradiol precursor progesterone, as well as prolactin, play an important role. Estrogens are causally related to the appearance of female secondary sexual characteristics such as the breasts, widening of the hips, and a feminine pattern of fat distribution. It is also involved in the regulation of the estrous-and menstrual female reproductive cycles. A misconception: the sex-specificity of sex-steroids is not clear-cut qualitative, but only quantitative. Aromatase activity Both androgens and estrogens are produced in the body of both males and females starting from cholesterol through a series of reactions and intermediates of which the details will not be dealt with here. Thus, their sex-specificity is not a matter of molecular structure, but only a matter of differences in their concentrations in the blood. In adult males, titers of testosterone are about 7 to 8 times higher than in adult females. In females, the opposite situation prevails, but the testosterone titers, although lower than in males are still relatively high in a woman and non-human females [20].. The difference in sex-steroids titers is mainly due to a sex-specific difference in aromatase activity ( Figure 3). This enzyme that resides in the membranes of the smooth endoplasmic reticulum (SER), like some other enzymes involved in the biosynthetic pathway of steroids, converts testosterone into estradiol, and androstenedione to estrone. Various factors influence the activity of this enzyme. For example, the anti-Müllerian hormone inhibits its activity. Aromatase activity is higher in females than in males. One could say that the males get somewhat more exposed ("poisoned") by testosterone than females, because males have a lower capacity to convert testosterone into estradiol. The opposite holds true for estradiol/estrogens. Although the molecular structures of testosterone and estradiol look alike at first glance, their differential effects on the morphology and physiology are drastic. This follows from their mode of action at the subcellular level. Mode of action of sex-steroids: nuclear and membrane receptors How differential sex-steroid hormonal balances affect every cell of a multicellular organism is often not well understood. 
One reason is that in contemporary endocrinology the major focus is on control of the expression of steroid hormone-sensitive genes. Such interaction is mediated by nuclear receptors. However, there is a second major target. Through their interaction with membrane receptors sex-steroids also bring about non-genomic effects, e.g. a sex-specific difference in Ca 2+ homeostasis in all somatic cells of the body. Such effects are often less studied and reported in the endocrine literature. Interaction with membrane receptors yields fast effects (often in seconds), while interaction with nuclear receptors is much slower because it involves protein synthesis. It is important to keep in mind that both androgens and estrogens are barely soluble in water. The values in males, e.g., are for testosterone 23.4 mg/L at 25°C (PubChem) and for estradiol 3.90 mg/L at 27°C (PubChem). As a consequence, in order to be transported through the bloodstream from their respective sites of synthesis (gonads, adrenal glands, etc.) these hydrophobic steroids need a lipoprotein carrier in the blood that delivers them at the plasma membrane of all cells of the body. Through hydrophobic interactions, the steroid hormones will move from the blood-borne lipoprotein carrier into the lipid bilayer of the plasma membrane. Because all cellular membranes are lipidrich and fluid, the steroids will start diffusing freely through all connected membrane systems of the cell, and end up in all their membranes: the Rough endoplasmic reticulum (RER), the Smooth endoplasmic reticulum (SER), Golgi, nuclear envelope, mitochondria, etc.. There they may influence many enzymes and signaling pathways, directly or indirectly. The hydrophobic nature of sex-steroids stops them from freely diffusing through the hydrophilic cytoplasm, e.g. to the nucleus, unless they are picked up and transported by a carrier protein with a hydrophobic moiety. Gross effects Even without knowing the details of the interaction of sex-steroids with their receptors, some of the major effects are visible without doing biochemical analyses. As explained above, in both sexes all tissues of the body respond to the sex-specific steroid hormone conditions. In humans, body length, muscle strength, skin properties, distribution and volume of fat/adipose tissue, protein secretion (e.g. through milk production), robustness in time of the skeleton, the types of gametes that are produced, and many more features differ. All these features necessarily depend upon differential protein synthesis (= differential gene activation), and are thus genetically determined. This means that they are governed by the genetic memory system and the central dogma (DNA→ RNA→ Proteins: = Nature), and not by the cognitive memory system (=part of Nurture). Effects on Ca 2+ homeostasis. The Calcigender paradigm Ca 2+ is a very potent and ubiquitous ion in all cells, and its concentration is precisely regulated (Figure 4). Behavioral effects are usually fast. If hormonal effects are involved, they are mediated through the interaction of hormones with their plasma membrane receptors, followed by modulation of intracellular pathways. Although the final outcome is that the intracellular Ca 2+ concentration must be kept very low, particularly in the resting condition of cells, such an outcome can be reached in many different combinations of causal agents. 
[21]

Figure 4. This figure illustrates that the huge gradients require incessant "efforts" to keep the Ca 2+ concentration in the cytoplasm at or around a very low concentration of 100 nM. For a more detailed physiological explanation, in particular with respect to the mechanisms indicated by the numbers 1, 2 and 3, see the original Open Access paper [7].

This variability not only yields the outcome that all cells of the body differ in their Ca 2+ homeostasis system, but also that such difference holds at the organismal level. This is the essence of the Calcigender concept as first formulated by De Loof [22]: males and females, and by extrapolation all gender forms, differ in their Ca 2+ -homeostasis, which means that there probably are as many gender forms with their specific behavior and physiology as there are sexually reproducing individuals in a species [1]. Out of the extensive literature on this topic, only a few examples will be mentioned. If under the influence of steroid hormones the intracellular [Ca 2+ ] rises, cytoskeletal proteins undergo conformational changes. This is well documented in muscle cells, but it also applies to other cell types, in particular excitable ones. Zylinska et al. [23] investigated the efficiency of selected neurosteroids, and reported that the hormones affect Ca 2+ transport activity, and that this effect depends on the isoform composition of Plasma Membrane Ca 2+ ATPases (PMCAs) as well as on the steroid's structure. PMCAs, of which four isoforms occur, keep the free Ca 2+ concentration in the nanomolar range. Zylinska et al. [24] also found that in excitable membranes (rat cortical synaptosomes) with a full set of PMCAs, estradiol, pregnenolone, and dehydroepiandrosterone apparently increased Ca 2+ uptake. Calmodulin strongly increased the potency for Ca 2+ extrusion in erythrocyte membranes incubated with 17 beta-estradiol or with pregnenolone. The results indicated that steroid hormones may sufficiently control the cytoplasmic Ca 2+ concentration within the physiological range. Calcium ions are essential for proper neurotransmission. Impairment in cytosolic Ca 2+ concentration and Ca 2+ signaling disturbs neuronal activity, leading to pathological consequences. [25][26][27]

Androgens as anabolic steroids

The higher anabolic effects of androgens over estrogens substantially contribute to sex-specific differences in muscle development and strength. One of the possible definitions of behavior says that "Behaviour is the total sum of all movements an organism makes". This explains in part the behavioral effects of steroids.

Physical training and anabolic steroids mimic each other's effects: an explanation based upon the principles of Ca 2+ -homeostasis

Human males are in general more muscular and stronger; they have a bigger heart, more voluminous lungs, more red blood cells, and a more robust, more stress-resistant skeleton than females. No wonder that some of their activities, and to some extent part of their behavior as well, rely on these effects. In males the average testosterone values fluctuate between 100 and 1000 nanogram/decilitre, while in women the normal values are between 10 and 70 ng/dl. This situation applies to many species of placental mammals. It should not be extrapolated to all animal species. In many invertebrates, the opposite situation prevails: in many insect species, females are stronger than males. [7]
Here, the sex-hormones are not of the testosteroneestradiol type steroids, but of the ecdysteroid-type, of which the titer is higher in reproducing females than in males. [28] Skeletal muscle development requires repeated contraction activity. When one breaks an arm that is next immobilized for several weeks in a plaster cast, the arm muscles start atrophying due to the forced inactivity. After the plaster is removed, it takes at least several weeks of training to make the muscles "get stronger" again. How does muscle/body training make the muscle mass increase thereby mimicking the effect of anabolic steroids, and vice versa? Which is the common denominator of physical training and of (administration of) anabolic steroids? At each contraction of a muscle, Ca 2+ is released from the lumina of the SER of the muscle cells. Reuptake of Ca 2+ restores equilibrium. Some enzymes needed for steroid biosynthesis reside in the membranes of the SER, others in the mitochondria. At rest, the lumen of the SER is loaded with Ca 2+ using it as a temporary storage site. High concentrations of intraluminal Ca 2+ probably inhibit some of the enzymes involved in lipid-or/and steroid biosynthesis. However, in case of muscle contraction during training the Ca 2+ gradient decreases. Perhaps the short-lived decrease in intraluminal Ca 2+ is sufficient for lifting the inhibition by the high Ca 2+ concentration in the lumen of the SER, allowing the synthesis of a small amount of steroids. These steroids then activate the synthesis of muscle proteins (actin, myosin, etc.) stimulating muscular growth. The effectiveness of the intake (oral, injection) of anabolic androgenic sex-steroids competes with that of physical training that causes a moderate local increase (in the muscle cells themselves) of androgenic steroids Progesterone and estrogens are "lipogenic" hormones, in particular in the context of pregnancy Steroid hormone concentrations in blood (titers) can substantially fluctuate in particular in woman during their menstrual cycle. In many animal species in which reproduction is seasonal, steroid hormone concentrations are linked to particular environmental conditions, e.g. the length of the photoperiod. In general egg, formation involves deposition and accumulation of substantial amounts of yolk material which is rich in proteins, lipids, and glycogen. In Placental mammals with their yolkless eggs, this is not the case anymore. However, the production of milk to nourish the newborn young also requires the mobilization of nutrients, either directly from the ingested food, or from the mobilization of nutrients, in particular lipids, that were stored during pregnancy in adipose tissue. Women normally gain 10-16 kg in weight during pregnancy, a successful strategy in times when food was or is scarcer than it is today in many countries. Such an effect has little to do with Nurture or cognition. Nurture is intimately linked to learning (imitation, selflearning, teaching, …). Learning implies the presence of a "cognitive memory system". All cells in both prokaryotes and eukaryotes must have such a system, otherwise they cannot engage in solving problems. [29,30] The cognitive memory system with the self-generated electrical activity of cells as a major foundation, is different from a genetic memory system (DNA → RNA → Proteins). 
[2,31] During the pioneering days of experimental endocrine research on the possible influence of sex-steroids and cognition, Christiansen and Knussmann [32] investigated in a group of 11 healthy young men whether a correlation exists between certain cognition activities and titers of testosterone and 5 alphadihydrotestosterone in serum and saliva. Several spatial and verbal tests were used. Within the normal physiological range of androgen levels, a positive correlation with spatial ability and field-dependence-independence, and a negative correlation with verbal ability were found. One should keep in mind that a correlation is no proof for a causal relation. Ulubaev et al. [33]. rightly stated that a distinction should be made between long-term effects of sexsteroids acting through developmental processes and short-term effects acting through learning through the cognitive memory system. The authors reported that although there is convincing evidence that sexsteroid hormones play an organizational role in brain development in men, the evidence for positive effects of sex hormones affecting cognition in healthy men throughout adult life remains inconsistent. To address this issue, they proposed a new multifactorial approach which takes into account the status of other elements of the sex hormones axis including receptors, enzymes, and other hormones. Humans are not an acceptable model for studying the effects of sexsteroids on cognitions because administration of sufficiently high doses of sex-steroids is likely to yield unwanted side effects. Hence, studies were done in humans in which changes in the natural cycles occur, e.g. in menopause, or which were surgically treated. Others were carried out in model placental mammals such as nonhuman primates [34]; or in e.g. rats. [20,[35][36][37] Information was also obtained from studying neurodegenerative diseases, e.g. Alzheimer. [26,27,34] The question that was asked was whether differences in Ca 2+ homeostasis can be altered by teaching-learning -imitation? The answer is: no. Perhaps, imposed drastic changes in feeding regime may have some influence on behavior, but this could also cause illness. However, changes in Ca 2+ -homeostasis affecting some aspects of behavior occur during female reproductive cycles (menstrual cycle, pregnancy, breastfeeding, and menopause). The sex versus gender issue This topic, with emphasis on the problem that there are only two sex forms (male and female), but that there are multiple gender forms ( Figure 5) has been covered in depth by De Loof [1] and will not be repeated here. Classical biological roles In the perspective of the continuation of any heterosexual population over time, the key activity of males and females is the production of gametes. This activity develops very early in embryonic development. It is genetically determined, and it does not require the presence of a partner of the opposite sex. However, in addition to the production of gametes itself, a second indispensable activity is required. Indeed, because gametes are only surrounded by a very thin plasma membrane, they are very vulnerable to harsh environmental conditions, e.g. desiccation. The risks are minimized by a variety of strategies aiming at bringing heterogametes into close proximity as efficiently as possible. In placental mammals and other terrestrial animals, this requires sex-specific types of pre-mating and mating behavior that have to mutually match [1]. 
Social gender roles As formulated in Wikipedia: Gender role [38], "A gender role, also known as a sex role, is a social role encompassing a range of behaviors and attitudes that are generally considered acceptable, appropriate, or desirable for people based on their actual or perceived sex. Gender roles are usually centered on conceptions of femininity and masculinity, although there are exceptions and variations. The specifics regarding these gendered expectations may vary substantially among cultures, while other characteristics may be common throughout a range of cultures. There is ongoing debate as to what extent gender roles and their variations are biologically determined, and to what extent they are socially constructed." A recent example of the belief that gender-neutrality is desirable is the video entitled "Boys don't cry" published in 2016 by The American Psychological association. The commentary says that "Growing up, boys often hear "Boys Don't Cry" as a stereotypical test of manhood. However, this stigmatizes normal human emotions, negatively affecting boys and men. In this short film, we want to change the landscape of emotions for boys and change how masculinity is interpretedwe want to let boys know that it is okay to show emotions". The ancient Greek difference in the education of boys between Sparta and Athens illustrates that different opinions on such topics already date from at least a few millennia ago. Is (more) gender neutrality desirable? In recent decades the number of efforts to change aspects of prevailing gender roles, which by some groups but in particular by the feminist movement, are believed to be oppressive or inaccurate keeps increasing. [38] A counter-reaction originating from a masculinist movement seems to gain ground. For constructing the respective claims, arguments based on physiological and evolutionary insights are seldom used, a missed opportunity, which is partially corrected in this paper. From an evolutionary point of view, biologists can only observe that "gender identity" and even more "gender neutrality" [39] is only an issue to a (small) percentage of the individuals of one species, namely the species to which we belong ourselves, the placental mammal Homo sapiens. Unanimity about its desirability or as the means to realize gender neutrality is inexistent, to the contrary. To our knowledge, no other species of placental mammals stimulates its young to develop into the direction of genderneutrality. The reason is simple: in their respective environments, gender-neutrality would very likely result in decreased reproductive fitness. Yet, gender neutrality in its purest form does exist in the animal kingdom, but not in mammals. In some molluscs, namely in some snails, in earthworms, and tunicates hermaphroditism is very common. It is also found in some fish species, less in other vertebrates. In some hermaphroditic snails, a reproducing adult functions as a male for one day, thus transfers sperm to another individual that that day behaves as a female. Another day it turns into a female that can be fertilized by an another individual that behaves that day as a male. This is a consequence of the fact that such animals are hermaphrodites (thus that can produce both eggs and sperm cells, not necessarily at the same moment), but that do not self-fertilize themselves. Selffertilization decreases genetic variability, hence it should be avoided. 
Hermaphroditism, although attractive at first sight, is not a major reproduction strategy in the animal kingdom, but in plants it is.

Discussion

The days when, worldwide, many people thought, not to say were convinced, that on average townsmen and townswomen were more intelligent than rural ones, that there were racial differences in intelligence, that the classical labor division between man and woman was the normal consequence of only their genetic differences, and, most of all, that there are also inborn sex-differences in intelligence between males and females, are largely (though not completely) behind us. Current school performance of many millions of pupils worldwide proves that the cited "intuitive assumptions" that gradations in "gender performances" (to use a neutral term) were largely Nature-based, thus inborn, need adjustment. The major cause of the change in thinking is the enormous increase in knowledge, both in the humanities and in the exact and biomedical sciences, as well as the introduction of the term "gender". Nowadays "Gender" [40] and "Gender role" [38] are at the very heart of many discussions. This term is very commonly used in the humanities, but much less in classical biology as a discipline of the exact sciences. Fundamental questions asked by biologists are: Why are there only two sexes, one that can produce sperm cells and the other that produces the bigger egg cells, while there are multiple gender forms? How does sexual dimorphism come into existence during early development? [41] How was the counterintuitive concept reached that there are as many gender forms as there are gamete producers, and that none is superior over the others? [1,42] Another intriguing question is: How did the heterosexual system of reproduction come into existence in the course of evolution? [3,22] Why did it gradually overrule asexual reproduction, i.e. reproduction without gametes, based upon the principles of regeneration? In contrast, the approach of the humanities focuses much more on the behavioral aspects in gender, in particular on the various aspects of interactions with other individuals with respect to reproduction. The different "retrograde time perspectives" also matter in the binary Nature versus Nurture debate. In the humanities, specifically in sociology, psychology and education, the focus is on the present-day situation.

Figure 5. Cartoon illustrating the idea that the main difference between the various gender forms resides in the Ca 2+ -homeostasis system, in particular in some brain areas. Given that the human brain contains about 100 billion nerve cells, it is de facto impossible that two individuals have exactly the same Ca 2+ -homeostasis system in the totality of their brain, even if these two individuals are identical twins. This figure illustrates the commonly observed situation that the sexual thinking and behavior of transgenders reflects more the situation of the other heterosexual somatic sex than their own somatic genetic sex. Between these two depicted extremes, numerous intermediate forms are theoretically possible. Indeed, it is more likely that not the whole brain but specific brain regions can display (subtle) changes in Ca 2+ homeostasis with effects on behavior as a result. Copied from De Loof [1] (own work), no copyright permission required.

For example, in some countries, the "political scene" gets more and more confronted with issues related to gender, such as equal rights
for non-dominant gender forms as well as gender-neutrality in various social environments and the civil status of transgenders. The methods used in the humanities for analyzing a problem barely rely on good knowledge of genetics and of animal physiology. The opposite situation prevails in the biomedical sciences. Here, the contemporary situation in the species Homo sapiens is framed in the genetic-, biochemical-physiological, endocrine and evolutionary perspective of the whole animal Kingdom. Thus, a major cause for the lack in unanimity in the Nature versus Nurture debate stems from speaking quite different scientific languages, and using specialized technical vocabularies. Instead of emphasizing the advantages of division of labor/tasks on the basis of pre-existing genetic differences as was common practice in the past, a recent tendency is towards pushing human society towards gender-neutrality. Gender-neutrality [39] (adjective form: gender-neutral), also known as genderneutralism or the gender neutrality movement, describes the idea that policies, language, and other social institutions should avoid distinguishing roles according to people's sex or gender, in order to avoid discrimination arising from the impression that there are social roles for which one gender is more suited than another". [39] Some positions propagated by some feminist and masculinist groups are controversial. It is essential that in the Nature versus Nurture discussion one should not only focus on the conscious or unconscious discrimination of a particular gender form for some types of jobs, one should also take into account the reasons why some jobs are more attractive to men, others to women. The factor "I like such job", is at least as important as "I could do it, if I have no other choice". A man can become a midwife, but few will spontaneously opt for such job. And young mothers may have a preference for a female midwife. Few woman opt for a job in e.g. road construction, for other reasons than because such a job requires too much muscular labor. Muscular labor is less and less an issue because in complex technology-minded societies machines drastically replaced heavy muscular labor by mainly males. Concurrently intellectual work and activities (the services society) have become more and more important. It gradually results in gender-neutrality with respect to "Who has the capacity can do the job irrespective of the gender issue". Selective competition with its inherent conflicts may result. Can Nurture, through all its means instrumental to cultural evolution [31] (many years of education, teaching-learning, imitation, imprinting, forced political or/and social pressure etc.) cause any changes in the genetics, endocrinology, or Ca 2+ -homeostasis of individuals? Unless some body-foreign substances would be administered or without artificial genetic manipulation, the answer is clearly: no. Indeed, the link between sex-specific steroid hormone titers and differential Ca 2+ -homeostasis is so comprehensive that it is biochemically impossible that the cognitive memory system could substantially redirect inborn sex-linked behavior into behavior typical for another gender form. We have to accept that as long as our species will exist, it will continue to carry its physiological, social and psychological evolutionary history. 
The classical binary approach in which only Nature (genes) and Nurture (education) are more or less equally important in bringing about gender and its behavioral consequences, but in which the Calcigender input is fully neglected, is no longer tenable. The importance of Nature which, among other activities also encompasses the physiology of Ca 2+ -homeostasis and signaling, largely outweighs that of Nurture. It also means that Man is not makeable. He/she is only dirigible in his/her behavior in a limited way. That does not mean that mitigating some "inborn" behavioral traits would a priori be impossible, to the contrary. More gender-neutrality can be advantageous and desirable in some circumstances, but it should not be imposed with approaches that violate the principles of genetics, endocrinology and animal physiology. It can be hoped for that the contribution of biology to the gender-, and Nature-Nurture debates may yield better insights and more tolerance.
2019-04-03T13:12:04.089Z
2019-01-01T00:00:00.000
{ "year": 2019, "sha1": "bc7cfba43a5fa40bea01afbabc6dad83f0115ce0", "oa_license": "CCBY", "oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/19420889.2019.1592419?needAccess=true", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "bc7cfba43a5fa40bea01afbabc6dad83f0115ce0", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
67779379
pes2o/s2orc
v3-fos-license
Explicit construction of non-stationary frames for $L^2$

We show the existence of a family of frames of $L^2(\mathbb{R})$ which depend on a parameter $\alpha\in [0,1]$. If $\alpha=0$, we recover the usual Gabor frame; if $\alpha=1$ we obtain a frame system which is closely related to the so-called DOST basis, first introduced by Stockwell and then analyzed by Battisti and Riba. If $\alpha\in (0,1)$, the frame system is associated to a so-called $\alpha$-partitioning of the frequency domain. Restricting to the case $\alpha=1$, we provide a truly $n$-dimensional version of the DOST basis and an associated frame of $L^2(\mathbb{R}^d)$.

Introduction

One of the most intriguing problems of modern Time-Frequency analysis is the construction of new efficient methods to represent signals, which can be one-dimensional or, more often, multidimensional, such as digital images. The increasing amount of data and their complexity force the development of optimized techniques that address the representation in a fast and efficient way. The starting point of this paper is the definition of the S-transform, first introduced by R. G. Stockwell et al. in [31] as
$$ (Sf)(\tau,\nu)=\int_{\mathbb{R}} f(t)\,\frac{|\nu|}{\sqrt{2\pi}}\, e^{-\frac{(\tau-t)^2\nu^2}{2}}\, e^{-2\pi i \nu t}\, dt. $$
The main novelty of the S-transform is the frequency-dependent window. The leading idea is the heuristic fact that, in order to detect high frequencies, it is enough to consider a shorter time. Therefore, the width of the Gaussian is not fixed but depends on the frequency, shrinking as the frequency increases. The S-transform was introduced to improve the analysis of seismic imaging, and it is now considered an important tool in geophysics, see [1]. In [21], the connection between the phase of the S-transform and the instantaneous frequency (useful in several applications) has been studied. See [5,6,13,18,22,24,26,36] for some applications of the S-transform to signal processing in general. From the mathematical point of view, M. W. Wong and H. Zhu in [35] introduced a generalized version of the S-transform in which the Gaussian is replaced by a general window function $\varphi \in L^2(\mathbb{R})$. The S-transform has a strong similarity with the STFT; actually it is also possible to show a deep link with the wavelet transform, see [17,32]. In fact, the S-transform can be seen as a hybrid between the STFT and the wavelet transform. Representation Theory provides a very deep connection among the S-transform, the STFT and the wavelet transform, as they all relate to the representation of the so-called affine Weyl-Heisenberg group, studied in [23]. This connection has been highlighted in the multi-dimensional case by L. Riba in [28], see also [29]. The affine Weyl-Heisenberg group is also the key to represent the $\alpha$-modulation groups, [7]. Our analysis focuses on the DOST (Discrete Orthonormal Stockwell Transform), a discretization of the S-transform, first introduced by R. G. Stockwell in [30]. In [4], the DOST transform has been studied from a mathematical point of view. It is shown that the DOST is essentially the decomposition of a periodic signal $f \in L^2([0,1])$ in a suitable orthonormal basis whose elements are finite linear combinations of the exponentials $e^{2\pi i \eta (t-\tau/2^{p-1})}$, $t \in \mathbb{R}$, $p \ge 1$, $\tau = 0,\dots,2^{p-1}-1$, with the convention that $D_0(t) = 1$; see Section 3 for the precise description of the basis functions. The DOST basis has a non-stationary time-frequency localization: roughly speaking, the time localization increases as the frequency increases, while the frequency localization decreases as the frequency increases. Therefore, the basis decomposition of a periodic signal is able to localize high frequencies, for example spikes.
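Before moving on, a small numerical illustration may help fix ideas about the frequency-dependent window. The following is a minimal numpy sketch of a direct (slow) discretization of the S-transform defined above; the function name, the integer-frequency sampling and the normalization are illustrative assumptions of this sketch, not the authors' implementation.

```python
import numpy as np

def s_transform(signal):
    """Naive discretization of the Stockwell S-transform.

    For each positive integer frequency nu the signal is weighted by a
    Gaussian of width proportional to 1/nu (the window shrinks as the
    frequency grows) and demodulated at that frequency.
    """
    n = len(signal)
    t = np.arange(n) / n                       # samples of [0, 1)
    freqs = np.arange(1, n // 2)               # strictly positive frequencies
    out = np.zeros((len(freqs), n), dtype=complex)
    for i, nu in enumerate(freqs):
        for j, tau in enumerate(t):
            window = (nu / np.sqrt(2 * np.pi)) * np.exp(-((tau - t) ** 2) * nu ** 2 / 2)
            out[i, j] = np.sum(signal * window * np.exp(-2j * np.pi * nu * t)) / n
    return freqs, out

# Usage on a short two-tone test signal
x = np.cos(2 * np.pi * 5 * np.arange(256) / 256) + 0.5 * np.cos(2 * np.pi * 40 * np.arange(256) / 256)
freqs, S = s_transform(x)
print(S.shape)   # (number of frequencies, 256)
```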
The time-frequency localization properties of the DOST basis imply that the coefficients $f_{p,\tau} = (f, D_{p,\tau})_{L^2([0,1])}$ represent the time-frequency content of the signal $f$ in a certain time-frequency box, which is related to a dyadic decomposition in the frequency domain. Moreover, this decomposition can be seen as a sampling of a generalized S-transform with a particular analyzing window which is essentially a boxcar window in the frequency domain. The DOST transform gained interest in the applied world after the fast FFT-based algorithm discovered by Y. Wang and J. Orchard; see [33,34] for the original algorithm and [4] for a slightly different approach which shows that the fast algorithm is essentially a clever application of the Plancherel Theorem. The orthogonality property is clearly very useful for several applications; nevertheless it is well known that, in order to describe signals, a certain amount of redundancy can be very effective. Therefore, it is a natural task to look for frames associated to the DOST basis. We follow the idea of the construction of Gabor frames: if $g$ is a suitable window function, for example a Gaussian, and $\alpha\beta < 1$, then $G(g,\alpha,\beta)$ is a frame, see for example [20]. It is possible to consider the Gabor frame as the frame associated to the standard Fourier basis, which is formed by all modulations with integer frequency, extended by periodicity and then localized using the translation of the analyzing window function $g$. Inspired by this approach, we consider the system of functions (1.3). For simplicity, here we have not introduced a frequency parameter. The idea of the system in (1.3) is to consider the DOST basis $D_{p,0}$ and then localize it with a window function $g$. The main difference from the standard Gabor system is that the translation parameter $k\,\nu\,\beta(p)$ is not uniform, but depends on the frequency parameter $p$. Therefore, (1.3) can be considered as a non-stationary Gabor frame, using the terminology introduced in [2]. Inspired by the theory of $\alpha$-modulation frames, see [14], [15], [16], [19], in Section 3 we introduce a family of bases of $L^2([0,1])$ depending on a parameter $\alpha \in [0,1]$. If $\alpha = 0$ we recover the standard Fourier basis, if $\alpha = 1$ the DOST basis; when $\alpha \in (0,1)$ we show that the basis is associated to a suitable $\alpha$-partitioning of the frequency domain in the sense of [16]. In Section 4, we prove the main result, see Theorem 4.13; we show that for each $\alpha \in [0,1]$ the localization procedure explained above produces frames of $L^2(\mathbb{R})$, provided the time and frequency parameters are small enough. The main tool is a non-stationary version of the Walnut representation, see Subsection 4.2. In Section 5, we analyze the higher-dimensional case. Restricting to $\alpha = 1$, we consider a multidimensional partitioning of the frequency domain and the associated frames. This construction is different from the usual extension of the DOST to higher dimensions. In [33,34], the DOST applied to two-dimensional signals (digital images) was essentially the one-dimensional DOST applied in the vertical and then in the horizontal direction; therefore it was not a truly bi-dimensional version of the transform.

Notations

In the paper we use the following normalization of the Fourier transform: $\mathcal{F}(f)(\omega) = \hat f(\omega) = \int e^{-2\pi i x \omega} f(x)\,dx$. In the multidimensional case, we do not write explicitly the inner product in $\mathbb{R}^d$; we keep the same notation as in the one-dimensional case. For $f, g \in L^2(\mathbb{R}^d)$, we denote the $L^2$-scalar product as $\langle f, g\rangle = \int f\,\bar g\,dx$. We denote by $\mathcal{S}(\mathbb{R}^d)$ the Schwartz space.
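As a rough companion to the dyadic decomposition and the FFT-based fast algorithm mentioned above, the sketch below computes DOST-type coefficients of a length-2^K periodic signal by slicing its FFT into the positive dyadic bands [2^(p-1), 2^p) and applying an inverse FFT of the band length. The normalization and the omission of the negative frequencies and of the Nyquist band are simplifying assumptions; this is not the exact algorithm of Wang and Orchard or of [4].

```python
import numpy as np

def dost_band_coefficients(x):
    """Coefficients of a periodic signal on the positive dyadic frequency bands.

    For band p >= 1 the 2**(p-1) frequencies 2**(p-1), ..., 2**p - 1 are
    extracted from the FFT and an inverse FFT of that length yields one
    coefficient per time slot tau = 0, ..., 2**(p-1) - 1.
    """
    n = len(x)
    K = int(np.log2(n))
    assert 2 ** K == n, "length must be a power of two"
    X = np.fft.fft(x) / n                    # approximate Fourier coefficients on [0, 1)
    coeffs = {0: np.array([X[0]])}           # p = 0 corresponds to the constant function D_0 = 1
    for p in range(1, K):                    # positive bands only, Nyquist bin omitted
        beta = 2 ** (p - 1)
        band = X[beta:2 * beta]              # frequencies 2**(p-1), ..., 2**p - 1
        coeffs[p] = np.sqrt(beta) * np.fft.ifft(band)
    return coeffs

# Usage: a pure tone at frequency 3 shows up in the band p = 2 (frequencies 2-3)
x = np.sin(2 * np.pi * 3 * np.arange(64) / 64)
c = dost_band_coefficients(x)
print({p: v.shape for p, v in c.items()})
```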
We then set for all p P N B α p,τ ptq " See Figure 1 for a plot of real and imaginary part of such basis with different values of α and p. For p P Z´we define B α p,τ ptq " B´p ,τ , τ " 0, . . . , β α p´pq´1. Notice that if α " 0 then β 0 ppq " 1, for all p, therefore τ is always zero, hence B 0 p,0 ptq " e 2πi pt . That is Ť p B 0 p,0 is the ordinary Fourier basis of L 2 pr0, 1sq. If α " 1, then i 1;0 " 0, s 1;0 " 1 and then i 1;p " 2 p´1 , s 1;p " 2 p , β 1 ppq " 2 p´1 for p ě 1. Therefore Figure 1. Elements of the DOST bases. Notice that α " 0 is the classical Fourier basis which has no localization in time. As α or p increases we gain time localization. The black line and the dotted-red one represents the real and imaginary part respectively. while B 1 p,τ ptq " That is Ť p,τ B 1 p,τ is the so called DOST basis, see [4], [30]. For now on, we restrict to positive integers, all the results hold true also for negative integers via simple arguments. That is the above partitioning is an α-covering. Remark 3.1. Notice that the α-partitioning introduced above is well defined for all α P r0, 1s. The case α " 1 is not defined as a limit case, as in the usual analysis of α-modulation spaces, see [14,16]. The key point is that we use an iterative scheme, instead of a definition involving the function p α 1´α . In this way, we can get rid of the singularity which arises at α " 1. At α " 1 the growth of βppq with respect to p is exponential while for α P r0, 1q is polynomial, see Figure 2. Using the iterative definition of βppq this fact causes no problems. Proof. Notice that for α " 0 and α " 1 the Theorem holds true in view of well known properties of Fourier basis and of results proven in [4]. The proof follows closely the argument in [4]. So, we can restrict to the case p " p 1 . Since e 2πi kt ( kPZ is an orthonormal basis of L 2 pr0, 1sq, we Let us suppose τ´τ 1 ‰ 0, then we can writè In equation (3.2) we used well known properties of geometric series and the fact that τ´τ 1 βαppq is not an integer number and therefore e 2πi pτ´τ 1 q{βαppq ‰ 1. Step Therefore, to prove that the functions B α p,τ are a basis of L 2 pr0, 1sq it is sufficient to check that Ť βαppq´1 Let us consider the projection of (3.3) into the Fourier basis. We obtain the system of equation We can rewrite the above equation as a linear system The square matrix in (3.5) is a Vandermonde matrix. Since the entries`e´2 πi piα;p`jq{βαppq˘β α ppq´1 j"0 are all distinct the determinant of the matrix is zero and therefore the system in (3.5) has only the null solution, that is in (3.4) and (3.3), α τ mus vanish for all τ " 0, . . . , β α ppq´1. 3.1. Localization properties of the functions B α p,τ . Now, we investigate the time-frequency localization properties of the functions B α p,τ . They clearly have compact support in the frequency domain and the support is precisely I α;p . Therefore, they cannot have compact support in the time domain. Nevertheless, it is possible to determine localization properties in the spirit of Donoho-Stark Theorem, [10]. Property 3.3. For each α P r0, 1s and p, τ , the following inequality holds if τ " 0, the interval must be considered as an interval in the circle: Proof. The proof is based on Taylor expansion and Gauss summation formula. In [4] the Property is proven for the case α " 1, actually the same proof works also for general α P r0, 1s. 
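As a quick numerical sanity check of the orthonormality statements in this section, the sketch below samples candidate alpha = 1 (DOST-type) basis functions on a uniform grid of [0,1) and verifies that their Gram matrix is close to the identity. The closed form used here, D_{p,tau}(t) = 2^{-(p-1)/2} * sum over eta in [2^{p-1}, 2^p) of e^{2 pi i eta (t - tau/2^{p-1})}, restricted to positive frequencies, is an assumption of the sketch consistent with the description above, not a quotation of the paper's exact definition.

```python
import numpy as np

def dost_atom(p, tau, t):
    """Sampled alpha = 1 basis function on the positive band [2**(p-1), 2**p)."""
    if p == 0:
        return np.ones_like(t, dtype=complex)
    beta = 2 ** (p - 1)
    eta = np.arange(beta, 2 * beta)
    return (beta ** -0.5) * np.exp(2j * np.pi * np.outer(eta, t - tau / beta)).sum(axis=0)

N = 1024                                    # grid resolution on [0, 1)
t = np.arange(N) / N
atoms = [dost_atom(0, 0, t)]
for p in range(1, 5):
    for tau in range(2 ** (p - 1)):
        atoms.append(dost_atom(p, tau, t))
A = np.array(atoms)
gram = A @ A.conj().T / N                   # Riemann-sum approximation of the L^2([0,1]) inner products
print(np.allclose(gram, np.eye(len(atoms))))   # expected: True
```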
In order to give a more flexible structure to the DOST basis, we generalize it to a redundant and non-orthogonal system and show that this leads to a L 2 -frame. First, we address the 1-dimensional case in full generality taking into account the phase-space tiling dependent from the parameter α defined in the first part of the paper. Frame definition. The idea is to extend by periodicity the DOST basis and then localize them using a suitable window function. where I α;p is defined in Section 3. Then, we define the α-DOST system Figure 3 for the plot of the frame elements. We did not plot the case α " 0 which is the standard Gabor frame element. Remark 4.2. Notice that the Fourier transform of (4.1) simplifies into This kind of system of functions has affinities with the well known family of Non-Stationary Gabor frame, see [3,11,12]. The function Φ µ p;α pωq " ÿ ηPµIα;pφ pω´ηq mimic a bell function of the set I α;p as shown in Figure 4. Sometimes, when we want to highlight the dependence of the element ϕ α p,k pxq on the frame parameters, we use the notation ϕ α;µ,ν p,k . Remark 4.4. As we pointed out is Section 3, when α " 0 the DOST basis reduces to the Fourier one. As one may expect, when we analyze the same case for the α-DOST system above, we end up with a Gabor frame. Precisely: Indeed, recalling that β α ppq " 1 when α " 0, i α;p " p we thus obtain We investigate the condition under which the family D α pµ, ν, ϕq is a frame of L 2 pRq. 4.2. Walnut-like Formula. We retrieve a Walnut-like formula for our DOST system in the spirit of Gabor analysis. This representation is a very useful tool to prove the frame property. Moreover, when α " 0, we recover the Walnut representation for Gabor frames. Lemma 4.5. Suppose that for some ε ą 0 and C ą 0 we have |ϕpxq| ď Cp1`|x|q´d´ε and |φpωq| ď Cp1`|ω|q´d´ε. Then for γ ą 0, where d is the dimension of the base space. We can state the main result of this section. Lemma 4.6. Let ϕ, ψ P L 2 pRq,f P C 8 c pRq, and Figure 3. Plot of the elements of the α-DOST frame in time, with a Gaussian function as a window and µ " 1{2. As we did for the bases, we observe that the localization increases when α or p grows. The black line and the dotted-red one represents the real and imaginary part respectively. Given ω P R, we know that there are at most C 2 indexes for which Φ µ p;α pωq ą 0. Moreover, Φ µ p;α ď C 1 , thus the upper bound of (4.7) holds true. On the other hand, for the same ω, there exists at least one p such that Φ µ p;α pωq ě C 3 . Remark 4.10. An explicit example of an admissible function is the compact version of a Gaussian; preciselyφpωq " χ ε pωq 1 ? 2 e´π ω 2 , where χ ε is a bounded smooth function such that In [27], similar functions are considered. Conjugate Filter. We can define a way to represent functions even without a canonical dual frame; namely, we construct a conjugate filter for the window function ϕ. This technique is a powerful tool for numerical implementations, see [27]. Set (4.10) then, the product between Ω µ;ν p;α and Φ µ p;α forms almost a partition of unity ÿ pPZ Ω µ;ν p;α pωqΦ µ p;α pωq " ν. We have the following corollary: Then, for any f P L 2 pRq Proof. We notice immediately that supp Ω µ;ν p;α pωq " supp Φ µ p;α pωq. We take the Fourier transform of (4.12), thus Repeating the same procedure as in the proof of Theorem 4.11, i.e. excluding the terms with k ą 0, we can conclude that, for ν small enough, pPZ Ω µ;ν p;α pωqΦ µ p;α pωq. 
(4.14) By (4.11), equation (4.14) becomes Finally, applying the inverse Fourier transform, we obtain (4.12). 4.5. DOST frames, general construction. In this section we prove that we can build up a α-DOST frame with milder hypothesis on the window function compared with the ones of the previous section. Preparatory Lemmata. We need some result concerning the decay of the elements Φ µ p;α outside the sets I α;p . We can write For now on, let us suppose νµ ă 1; then we notice that the term in (4.18) is identically zero for each p P Z, since the supports are disjoint. Clearly this is not restrictive since we are considering the limit for ν Ñ 0. We remark that: (4.22) dˆω´k β α ppq ν , µI α;p˙ěˆ| k| ν´µ˙β α ppq , @ω P µI α;p , with |k|´νµ ą 0, k P Zzt0u. We then analyze the term in (4.19). For each ω there exists a unique p " p such that ω P µI α;p , hence for each ω We made use of the decay proven in Lemma 4.14 and of (4.22). Since N´1 ą 1 and the inequality (4.23) does not depend on ω, we obtain Since the sum is convergent uniformly with respect to ν, the term in (4.24) goes to 0 as ν Ñ 0 with the rate ν N´1 . In order to consider the term in (4.20), notice that for all ω P R, there exists a uniquep such that ω P µI α;p . Moreover, arguing as above, for all p P Zz tpu there exist a unique k p P Zz t0u such that ω´k p βαppq ν P I α;p . Hence, we can write (4.20) as Finally, by (4.16), we can write The sum in (4.26) converges uniformly with respect to ω by Lemma 4.15, therefore the whole sum goes to zero as ν does. The last term to be taken into account is (4.21), which includes the "tails" of our window function. Fix p P Z, by definition if either ω P µI α;p or ω´k βαppq ν P µI α;p , then Define the set Notice that only a finite number of element k belong to this set; precisely, when µν ă 1, then |F ω p | ď 2, uniformly in ω and p. So, for all ω, we can split the term in (4.21) as follows: We show that both terms are bounded by a constant times ν ε with ε ą small enough independent on ω. We notice that if k R F ω p , then wherek is the closest index to k inside F ω p ; hence, we can rearrange the (infinite) indexes k P Zzt0u into j P Zzt0u such that dˆω´j β α ppq ν , µI α;p˙ě |j| ν β α ppq . From the discussion above, and the estimate (4.16) it follows that The latter term is summable in j, Lemma 4.15 implies the summability in p as well, therefore the whole term goes to zero as ν N´1 . The last part follows from the observation below: if there exists k P F ω p , then Since µ is fixed and ν Ñ 0, we can assume µν ă 1; thus, there exists δ such that µν " 1´δ. We split our analysis in two cases, first we consider dˆω´k β α ppq ν , µI α;p˙ď δ 2ν β α ppq . Let s P µI α;p such that d pω, sq " d pω, µI α;p q then by triangular inequality d pω, µI α;p q " d pω, sq ě dˆω, ω´k β α ppq ν˙´dˆω´k Hence d pω, µI α;p q ą |k| β α ppq ν´β Then, On the other hand, when which goes to zero as ν N´1 . Using the inequalities, it is clear that in both cases since the sum is uniformly bounded with respect to ω, the above inequalities imply that (4.27) goes to zero as ν ε . The quantity p1`d pω, µI α;p qq N´1´ε is summable if N´1´ε ą 1. This is granted by the fact that N ą 2 and that we can chose ε small enough. 4.7. Proof of the main result. -}f } 2 2 By hypothesis, we know that Hence, there exists ν 0 ą 0 such that for all 0 ă ν ă ν 0 Then, the action of the frame operator can be bounded as follows , . Proof. The polynomial decay claimed in (4.29) is trivial, since ϕ is a Schwartz function. 
For the lower bound in (4.28), we argue as follows: for any ω P R there exists only one p such that ω P µI α;p . For the sake of simplicity, we assume p ą 0, the negative case follows with the same argument. Therefore, ÿ p |Φ µ p;α pωq| 2 ě |Φ µ p;α pωq| 2 . Since ω P µI α,p , then ω " µpi α;p`t q, 0 ď t ď β α ppq. Hence , because of the positivity of the Gaussian. The maximum value is reached when |t´j| is small. Our construction implies that there exists j such that |t´j| ď 1, thus |Φ µ p;α pωq| 2 ě which is independent on p and ω, as desired. We rewrite the sum above as follows: Finally, since ϕ belongs to the Wiener Space, we can write where }¨} W is the Wiener norm. We have used a well known property of Wiener space, see e.g. [20][Lemma 6.1.2]. Higher Dimensions We consider here the case α " 1 and an arbitrary dimension. We define a (parabolic) phase space tiling and, for suitable window functions, we provide a frame of L 2 pR d q. We follow the ideas of wave atoms proposed in [8,9] and subsequently adapted to the Gaussian case by [27]. For the sake of simplicity, we enlighten the notation used before by suppressing the parameter α. As for the dimension d " 1, we begin with the painless case using a smooth and compactly supported window function. Moreover, we can define an explicit conjugate frame that leads to a reconstruction formula. Then we generalize the construction. 5.1. Phase space partition. Define the Cartesian coronae C p as follows: ω " pω 1 , . . . , ω d q P R d : max 1ďiďd |ω s | P rβ ppq , β pp`1qq Each corona is further partitioned in (open) boxes of side β ppq " 2 p´1 , precisely where s "´2,´1, 0, 1 and max s"1...d`ls`1 2˘R C 0 , i.e. the centers are outside the inner corona. The indexes s label every possible box inside the corona. It can be easily checked that the number of such boxes (or multi-indexes) is 2 d p2 d´1 q, for every p ě 1. We also define, according to Section 3, see Figure 5. See Figure 6 as an example of frame element both in time and frequency. Figure 7 shows how the elements are localized in the set I p in frequency. Painless frame expansion. As for the case of dimension d " 1, we start with compactly supported window functions. We adapt the definition of admissible window. Using this definition we can immediately obtain some important properties of Φ µ p; . Lemma 5.3. Let pϕ, µq be admissible, then where C 1 , C 2 , C 3 are defined above (cf. 4.8) and Diam denotes the maximum distance between points of the set, i.e. the diameter. The proofs of these lemmata are analogous to Lemmata 4.6-4.9. Theorem 5.5. Consider pϕ, µq being admissible. Then there exists ν ą 0 such that the DOSTsystem M D pµ, ν, ϕq is a frame for L 2`Rd˘. Precisely, there exist A, B ą 0 such that for all f P L 2`Rd( Preparatory Lemmata. We recall the same result proved in dimension d " 1. In order to analyze the term in (5.14), notice that for each ω P R d there exist uniquep,l such that ω P I¯ p . Notice that we have (5.17) dˆω´k β ppq ν , µI p˙ą |k|ˆ1 ν´? dµ˙β ppq , @ω P I p . For each ω P R d , by (5.17) we have Our hypotheses grant that N´d ą d, then the above remarks implies that where the latter tends to 0 as ν Ñ 0 and the constants are uniformly bounded with the respect of ω. Hence the term in (5.14) has a limit vanishing as ν approaches zero. The term in (5.15), goes to zero as ν goes to zero as well. We notice that for each ω, there exist p,l such that ω P I¯ p . For the same reason for each p, l P`Nz tpu , Jz ¯ (˘, there exists k p,l such that w´k βppq ν P I p . 
Therefore, Then, applying equation (5.17), (5.12) and Lemma 5.9 as in (4.26) we can conclude that the term in (5.18) goes to zero as ν goes to zero independently on ω. One can split the term in (5.16) as follows: which goes to zero as ν goes to zero as desired. We stress that`1`d`ω, µI p˘˘N´d´ε is summable if N´d´ε ą d which is granted by Lemma 5.9. Since the bounds are all unifrom with the respect to ω, we can conclude that the terms in (5.16) goes to zero as ν does. Proof of the main result. Proof. From the d " 1 case, we get xS α;µ,ν ϕ,ϕ f, f y ď 1 Remark 5.11. As we did in dimension d " 1 (cf. Theorem 4.17), we can show that the normalized Gaussian fulfills the hypothesis of Theorem 5.7. Conclusions We constructed a frequency-adapted frame which covers Gabor and Stockwell-related frames. Our setting includes also general α-phase-space partitioning. This approach appears natural to describe α-Modulation spaces and this will be subject of a future work. In [4], the author prove that the DOST basis is able to diagonalize the S-transform with a suitable window function which is essentially a boxcar window in the frequency domain and that the evaluation of the DOST-coefficients turns out to be the evaluation of the S-transform with this particular window in a suitable lattice. The natural question is if the α-DOST bases introduced in this paper have the same property, clearly not with respect to the S-transform, but in relation to another transform, similar to the flexible Gabor-wavelet transform (or α-transform), see e.g. [16] for the the definition. The n-dimensional case considered in Section 5 is restricted to the case α " 1, hence a suitable phase-space partitioning is yet to be defined for α P r0, 1q and will be part of our future studies. This issue has been already analyzed in the two dimensional case by N. Morten in [25] using the theory of Brushlets. From a computational stand point, we aim to implement and compare our results with existent methods. We are interested in testing in various applications such as medical and seismic imaging and also general image processing. As pointed out in the introduction, we remark that our approach consider the n-dimensional case in a peculiar way: instead of applying the one dimensional DOST in each direction sequentially, we provide a native n-dimensional setting. This approach is similar to the Wavepackets and Curvelets one, see [8,27]. A natural question arises: is the density of our frames comparable with the Gabor case? For instance, is it true that if the volume of the lattice is strictly lower than 1, the Gaussian leads to a (a) Time view, α " 1, p " 3 (b) Frequency view, α " 1, p " 3 (c) Time view, α " 1, p " 5 (d) Frequency view, α " 1, p " 5 Figure 6. Time and frequency outlook of two window functions. We observe heavy decay in time while in frequency we localize around a certain frequency. The window ϕ is in both cases a normalized Gaussian. frame? And is this condition independent on α? Figure 7. Two window functions for α " 1 and p " 5, 7. We see how the normalization β ppq´1 and the width of I p affects the shape of the windows. The window ϕ is, in both cases, a normalized Gaussian.
2015-10-09T12:33:44.000Z
2015-09-04T00:00:00.000
{ "year": 2015, "sha1": "ad540d40e291c420fbace7e39a5b74f0654d0f3a", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "ad540d40e291c420fbace7e39a5b74f0654d0f3a", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
58666864
pes2o/s2orc
v3-fos-license
Segmented poly(A) tails significantly reduce recombination of plasmid DNA without affecting mRNA translation efficiency or half-life Extensive research in the past decade has brought mRNA closer to the clinical realization of its therapeutic potential. One common structural feature for all cellular messenger RNAs is a poly(A) tail, which can either be brought in cotranscriptionally via the DNA template (plasmid- or PCR-based) or added to the mRNA in a post-transcriptional enzymatic process. Plasmids containing poly(A) regions recombine in E. coli, resulting in extensive shortening of the poly(A) tail. Using a segmented poly(A) approach, we could significantly reduce recombination of plasmids in E. coli without any negative effect on mRNA half-life and protein expression. This effect was independent of the coding sequence. A segmented poly(A) tail is characterized in that it consists of at least two A-containing elements, each defined as a nucleotide sequence consisting of 40–60 adenosines, separated by a spacer element of different length. Furthermore, reducing the spacer length between the poly(A) segments resulted in higher translation efficiencies compared to homogeneous poly(A) tail and reduced recombination (depending upon the choice of spacer nucleotide). Our results demonstrate the superior potential of segmented poly(A) tails compared to the conventionally used homogeneous poly(A) tails with respect to recombination of the plasmids and the resulting mRNA performance (half-life and translational efficiency). INTRODUCTION The past decade has witnessed the emergence and rapid application of in vitro transcribed messenger RNA (mRNA) as a therapeutic molecule. Compared to classical gene therapy with DNA-based vectors, use of mRNA offers several advantages, such as transient expression (Kapeli and Yeo 2012;Lodish 2012), lack of necessity to enter the nucleus, and no risk of chromosomal integration (Cannon and Weissman 2002;Hacein-Bey-Abina et al. 2003;Bernal 2013). After overcoming the intrinsic limitations of the mRNA, namely immunogenicity and instability, using chemically modified nucleotides (Karikó et al. 2005(Karikó et al. , 2008Kormann et al. 2011), researchers have significantly improved the mRNA molecule by optimizing the structural elements, namely cap (Grudzien-Nogalska et al. 2013;Ziemniak et al. 2013;Kowalska et al. 2014), 5 ′ -and 3 ′ -UTRs (Ferizi et al. 2016;Schrom et al. 2017;Trepotec et al. 2018), and coding sequence(s) (Thess et al. 2015;Schrom et al. 2017). Increasing its stability and translational yields has led to a progress from the first use of encoding a potentially therapeutic protein (Wolff et al. 1990) to preclinical (Bahl et al. 2017;Richner et al. 2017) and clinical applications (Mullard 2016). Regardless of the target protein or tissue, all cellular protein encoding RNAs with some exceptions (e.g., histones) share a common structural feature, that is, poly(A) tail. Inside the cell nucleus, the poly(A) tail is added to the mRNA in a post-transcriptional manner downstream from the gene-encoded polyadenylation signal (AATAAA). The poly(A) tail is essential for the stability (Sachs 1990;Oliveira and McCarthy 1995) and translation (Sachs 1990;Wells et al. 1998) of the mRNA. The poly(A) tail in the mRNA is recognized by poly(A)-binding protein (PABP) which in turn interacts with eIF4G of the translation initiation complex, thereby forming a closed loop (Mangus et al. 2003;Goldstrohm and Wickens 2008) and the resulting messenger ribonucleoprotein particle. 
In the case of in vitro-transcribed mRNA, the poly(A) tail can either be encoded in the DNA template (PCR product- or plasmid-based) or added enzymatically to the mRNA in a separate step after in vitro transcription. Each of the abovementioned approaches has its own set of limitations. While PCR offers the ease of high throughput and is widely used for small-scale mRNA production (up to a few hundred milligrams), its high production costs and the risk of mutagenesis during PCR amplification (compared to plasmid production in bacteria) limit its usefulness for large-scale production (several grams). Plasmid production, on the other hand, is well established, can be performed under GMP conditions, and has lower production costs and risks of mutations (in the coding sequence) when compared to a PCR-based approach. However, plasmid DNA-encoded homopolymeric stretches [e.g., poly(A)] recombine during bacterial amplification of the plasmid DNA. Previous studies reported the generation of spontaneous deletion mutants during amplification of plasmids starting with ∼100 bp of poly(dA:dT) sequences (Preiss et al. 1998). For longer poly(A)s, for example poly(A)150, the instability is too high to allow isolation of any single positive clone (Grier et al. 2016). Despite this limitation, template-encoded poly(A) offers certain advantages over enzymatic post-polyadenylation of mRNA, such as a defined and reproducible poly(A) length resulting in a homogeneous product (Holtkamp et al. 2006). Although enzymatic post-polyadenylation of mRNA warrants sufficiently long poly(A) tails (Cao and Sarkar 1992; Martin and Keller 1998), the composition of the final product, due to different poly(A) lengths, is difficult to control and therefore might not meet regulatory requirements (Weissman 2015). In addition to the above-listed advantages of cotranscriptional polyadenylation, the reduced number of steps in RNA production is likely to translate into lower production costs. Moreover, enzymatic polyadenylation of mRNA needs to be carried out under alkaline conditions, as the enzyme poly(A) polymerase has its highest activity at pH >7.5. mRNA is highly susceptible to alkaline hydrolysis, which in turn results in poorer mRNA quality, especially with longer transcripts (>3 kb) (Voet and Voet 2011). As an alternative to circular plasmids, Grier et al. (2016) have proposed the use of a linear plasmid system, pEVL, which allows stable cloning of poly(A)s of up to 500 bp. A similar linear vector-based system (pJAZZ) is commercially available from Lucigen but suffers from the limitations of a large vector size (>12 kb), a limited choice of cloning enzymes (available only as either SmaI- or NotI-predigested vector), and a very low copy number. Having a plasmid template-encoded poly(A) tail which is not prone to recombination but still supports mRNA stability and translational efficiency in a manner comparable to a conventional natural poly(A) would be an ideal solution to the abovementioned limitations. The main aim of the present study was to investigate whether segmentation of the poly(A) tail could reduce recombination of a high-copy plasmid vector in E. coli. For this, the most widely used but relatively unstable poly(A) tail of ∼120 A's [poly(A)120] was split into either three segments of 40 A's [poly(A)3 × 40] or two segments of 60 A's [poly(A)2 × 60]. The segmentation scheme was designed keeping in mind the functional "PABP footprint" on mRNA.
While PABP requires a minimum of 12 adenosines to bind, protein oligomers can bind to the same poly(A) stretch, thereby forming a repeating unit of ∼27-30 nt (Kornberg 1980, 1983; Wang et al. 1999). Moreover, the work by Preiss et al. showed that a single PABP molecule bound to mRNA, while interacting with the 5′ cap structure, was not sufficient for promoting translation (Preiss et al. 1998). Based on these data, our constructs were designed to enable at least one oligomeric stretch of PABP per segment (30 nt). Besides recombination, translation efficiency and mRNA half-life measurements were made to compare the effect of segmentation on these critical attributes of the molecule. Here too, poly(A)120 was used as a benchmark, as it has been shown in previous studies to result in high protein expression (Holtkamp et al. 2006; Bangel-Ruland et al. 2013). We show that segmentation of the poly(A) tail does not negatively affect translational yield and mRNA half-life, but eases the technical difficulties connected with recombination of homopolymeric poly(A) stretches in plasmid vectors.

RESULTS AND DISCUSSION

The current study on segmentation of poly(A) into smaller fragments separated by spacer elements was prompted by technical challenges often met while designing and producing plasmid DNA templates for use in in vitro transcription to produce mRNA for use as transcript therapies.

Design of modified/segmented poly(A) tails

In order to produce recombinant RNA transcripts with segmented poly(A) tails, the corresponding DNA sequences were cloned into a plasmid vector downstream from the gene of interest (GOI). Figure 1 schematically shows the composition of the different poly(A) tails and their spacer separators. The most conventionally used standard poly(A) tail in plasmid vectors (Holtkamp et al. 2006; Kormann et al. 2011; Vallazza et al. 2015; Balmayor et al. 2016, 2017; Ferizi et al. 2016), containing ∼120 A's [poly(A)120], was split either into three segments, each comprising 40 A's, separated by an NsiI restriction site of 6 nt [poly(A)3 × 40_6], or into two equal segments of 60 A's, also separated by 6 nt [poly(A)2 × 60_6]. This way, we could control the size of each segment and still have a physical separator among the adenosines. According to previous literature (Kornberg 1980, 1983; Wang et al. 1999), a minimum of 12 adenosines is needed for the binding of a single PABP molecule. However, a single PABP bound to poly(A) is not enough to support translation, even though it can interact with the eIF4 complex (Preiss et al. 1998). The segments of poly(A)3 × 40_6 and poly(A)2 × 60_6 are long enough to ensure binding of more than three copies of PABP per segment. In order to investigate the role of the spacer length between the two A60 segments, besides the one separated by 6 nt, five additional constructs were synthesized with a spacer length of either 12 nt [poly(A)2 × 60_12], 24 nt [poly(A)2 × 60_24], or 1 nt [poly(A)2 × 60_C, poly(A)2 × 60_G, poly(A)2 × 60_T].

Segmented poly(A) tails reduce recombination of poly(A)-containing plasmids in E. coli

Instability of poly(A)-containing plasmids in E. coli has been reported previously (Kühn and Wahle 2004; Godiska et al. 2009) and is a major risk of failure when using such poly(A)-containing plasmids for large-scale mRNA production. We examined whether the use of segmented poly(A) affected the recombination efficiency of plasmids post-transformation into E. coli.
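To make the construct layouts concrete, the following small Python sketch assembles the poly(A)-region template strings described above. The 6-nt spacer is written as the NsiI recognition sequence, the 12- and 24-nt spacers are left as undefined N's, and flanking cloning sites are omitted, so these strings are illustrative layouts rather than the published plasmid sequences.

```python
NSI_I_SITE = "ATGCAT"   # 6-nt NsiI recognition sequence used as the spacer (per the text)

def segmented_poly_a(segment_length, n_segments, spacer):
    """Return a poly(A)-region template as a string of A-segments joined by a spacer."""
    return spacer.join("A" * segment_length for _ in range(n_segments))

layouts = {
    "poly(A)120":     segmented_poly_a(120, 1, ""),
    "poly(A)3x40_6":  segmented_poly_a(40, 3, NSI_I_SITE),
    "poly(A)2x60_6":  segmented_poly_a(60, 2, NSI_I_SITE),
    "poly(A)2x60_12": segmented_poly_a(60, 2, "N" * 12),  # 12-nt spacer, sequence not specified here
    "poly(A)2x60_24": segmented_poly_a(60, 2, "N" * 24),  # 24-nt spacer, sequence not specified here
    "poly(A)2x60_G":  segmented_poly_a(60, 2, "G"),
    "poly(A)2x60_T":  segmented_poly_a(60, 2, "T"),
    "poly(A)2x60_C":  segmented_poly_a(60, 2, "C"),
}
for name, seq in layouts.items():
    print(f"{name}: {len(seq)} nt total, {seq.count('A')} adenosines")
```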
To test this, coding regions for different proteins (d2EGFP, luciferase, and hEPO) were cloned upstream of these poly(A) formats [poly(A)120, poly(A)2 × 60_6, poly(A)3 × 40_6] into a pUC57-Kanamycin (GenScript) vector. Post-transformation into E. coli, clones were screened for the insert, and positive clones (containing the desired insert) were additionally screened for the length of the poly(A) region. For each of the poly(A) formats, the poly(A) region was digested with restriction enzymes and the digestions were resolved on a Fragment Analyzer (capillary gel electrophoresis) to measure the size of the poly(A) fragment. As expected, recombination in the poly(A) region was observed for more than 50% of the clones containing a homologous poly(A), poly(A)120. The proportion of recombination observed with the poly(A)120 format was sequence-independent and comparable to the values reported by Grier et al. (2016). By splitting the poly(A) into either poly(A)3 × 40 or poly(A)2 × 60, recombination in E. coli could be reduced, with the most stable clones (<20% recombination) obtained with plasmids containing poly(A)2 × 60_6 (Fig. 2). This trend was observed for all the tested sequences, indicating this reduction in recombination to be sequence-independent.

Effects of poly(A) segmentation on mRNA productivity

Encouraging results of reduced recombination prompted us to investigate the performance of our segmented poly(A) tails with respect to mRNA stability and expression. For this, the coding region of d2EGFP (destabilized EGFP with a relatively short protein half-life) was cloned into our different poly(A)-containing vectors and chemically modified mRNA (modification 1) was produced using previously described protocols (Kormann et al. 2011; Trepotec et al. 2018). The resulting mRNAs were transfected into A549 cells and, at different time points post-transfection (4, 24, 48, and 72 h), both d2EGFP protein and mRNA were quantified using FACS and real-time reverse transcriptase PCR, respectively (Fig. 3).

[Figure 3 caption fragment: (B) d2EGFP mRNA quantification in A549 cells. (C) mRNA productivity was calculated by dividing the mean MFI (FACS data; A) by the mRNA amounts (real-time PCR data; B) and normalizing these ratios to those observed with the poly(A)120 construct. Values represent mean ± SD of three replicates. Statistical significance was assessed by two-way ANOVA with P-values: (*) P < 0.5, (**) P < 0.01, (***) P < 0.001, (****) P < 0.0001.]

Comparable levels of d2EGFP protein were observed for segmented poly(A) constructs compared to the control A120 at all four time points. Comparable levels of d2EGFP mRNA were observed for all poly(A) formats, except at 24 h post-transfection, where lower mRNA amounts were quantified for the segmented poly(A) formats. Similar to our previously published work (Ferizi et al. 2016), we calculated the mRNA productivity, defined as the amount of protein (d2EGFP median fluorescence intensity) normalized to the amount of mRNA (quantified via qPCR), for the three poly(A) formats. Surprisingly, higher mRNA productivities were observed for segmented poly(A) constructs at earlier time points (4 and 24 h) compared to the A120 format. Similar to the experiments with d2EGFP, luciferase protein and mRNA quantification was investigated in A549 cells at 24 h post-transfection with luciferase-encoding mRNA containing either of the poly(A) formats [poly(A)2 × 60_6, poly(A)3 × 40_6 vs. poly(A)120]. Furthermore, to address the effect of nucleotide modifications, luciferase mRNAs were produced using either unmodified nucleotides, modification 1 or modification 2 nucleotides. These modifications have been described previously (Trepotec et al. 2018). Briefly, modification 1 included 25% 5-methylcytidine and 25% 2-thiouridine. As for modification set 2, 35% 5-iodouridine and 7.5% 5-iodocytidine were used in the IVT reaction for RNA production. As an additional benchmark, a previously published construct (Thess et al. 2015) comprising a homopolymeric A stretch (A63), a homopolymeric C stretch (C31) and a histone stem-loop (ACH; Fig. 1) was used. As histone mRNAs lack poly(A) tails, the functions of poly(A), that is, stability and translation efficiency, are performed by the conserved stem-loop (Williams and Marzluff 1995; Zanier et al. 2002). Use of the segmented poly(A)2 × 60_6 construct significantly increased protein levels post-transfection in a modification-independent manner when compared to the poly(A)120 and ACH benchmarks (Fig. 4). No drastic differences were observed between the mRNA amounts for the different poly(A)-format-containing luciferase mRNAs across modifications. The poly(A)2 × 60_6 construct was more productive than any other poly(A) format when using modification sets 1 or 2. The construct with hairpin structures (ACH) expressed significantly lower amounts of luciferase compared to all other constructs containing 120 adenosines, despite these three constructs being present in the cell in substantial amounts (qPCR data). The reduced translation efficiency of hairpin-containing constructs is further confirmed by calculating the mRNA productivities.

Effects of poly(A) segmentation on translation of physiological targets

The initial results of reduced recombination and comparable/higher mRNA productivity compared to poly(A)120, with intracellular reporter proteins (d2EGFP and luciferase), prompted us to further test the poly(A)2 × 60_6 format with additional physiological targets. The selected targets varied in the length of their mRNAs and the cellular localization of the protein: human erythropoietin (0.9 kb) as a prototype of a secretory protein and human cystic fibrosis transmembrane conductance regulator (CFTR; 4.5 kb) as a prototype of a membrane protein. The codon-optimized sequence encoding hEPO was cloned into the pUC57-Kanamycin vector upstream of either poly(A)120 or poly(A)2 × 60_6. mRNA was produced for each of the constructs using either unmodified, modification 1 or modification 2 sets of nucleotides. Since EPO is primarily secreted by kidney cells, transfection experiments, at two doses, were performed in human HEK293 cells. Protein concentrations were determined via ELISA at 24, 48, and 72 h post-transfection (Fig. 5). With some exceptions (e.g., unmodified RNA at 24 h and 72 h and modification 1 at 72 h), no significant differences were observed between the compared poly(A) formats. To further investigate the relationship between physiological gene expression and poly(A) tail segmentation, we focused on mRNA constructs encoding human CFTR furnished with either poly(A)2 × 60_6 or poly(A)120. Both constructs were produced with the unmodified set of nucleotides, and transfection experiments with CFTR mRNA were performed in 16HBE14o- cells. Only unmodified CFTR mRNA was used, as a previous study (Bangel-Ruland et al. 2013) has demonstrated functional restoration of CFTR in human CF airway epithelia after transfection with unmodified CFTR mRNA containing a poly(A) tail of 120 A's.
At 24 h and 48 h post-transfection, cells were lysed and a western blot was performed for the CFTR protein. Hsp90 was used as a housekeeper. Similar to our previous results with d2EGFP, luciferase and hEPO, use of the segmented poly(A)2 × 60_6 did not negatively affect the resulting protein amounts post-transfection when compared to the conventionally used poly(A)120 (Fig. 6).

Spacer region expansion in poly(A)2 × 60

Reduced recombination with segmented poly(A)2 × 60_6, together with comparable (d2EGFP, EPO, CFTR) or higher (luciferase) translation and no significant effects on mRNA stability, prompted us to further investigate the spacer length of this specific poly(A) format. For ease of experimental feasibility, two new luciferase constructs were made with longer spacers [12 and 24 nt: constructs poly(A)2 × 60_12 and poly(A)2 × 60_24 in Figure 1]. The different luciferase mRNAs (unmodified, modification 1 and modification 2) for the three poly(A) formats [poly(A)2 × 60_6, poly(A)2 × 60_12, and poly(A)2 × 60_24] were transfected into A549 cells. At 24 h post-transfection, significantly lower luciferase expression was observed with the longer spacers using unmodified and modification 1-containing mRNA (Fig. 7). These modification-specific effects could be due to the spacer region which, upon incorporation of chemically modified nucleotides, may affect the binding of PABP to the two segments of poly(A). With two exceptions [poly(A)2 × 60_24 unmodified and poly(A)2 × 60_6 modification 1], comparable levels of luciferase mRNA could be quantified in the cells. Therefore, increasing the spacer length to more than 6 nt in the segmented poly(A)2 × 60_6 tail did not result in any significant advantage, neither in translation nor in mRNA stability.

[Figure caption fragment: Values represent mean ± SD of three replicates. Statistical significance was assessed by two-way ANOVA with P-values: (**) P < 0.01, (***) P < 0.001; n = 3.]

Spacer region reduction in poly(A)2 × 60

The next set of experiments examined the effect of reducing the spacer length to a single nucleotide in the poly(A)2 × 60 segmented poly(A) tail on protein expression and mRNA productivity.
Identification of the mechanisms underlying the observed reduced recombination and enhanced translation with segmented poly(A)2 × 60_1 compared to classical poly(A)120 will be the subject of future studies. These results allow us to recommend a segmented poly(A) region [poly(A)2 × 60] with either a 6-nt or a single-nucleotide (G/T) spacer for use in plasmid-based vectors for RNA production. Using such a segmented poly(A) did not have any negative effect on protein expression and mRNA half-life but reduced recombination of plasmids in E. coli.

Plasmid preparation

The synthetic poly(A) sequences were introduced into the vector backbone either as annealed complementary oligonucleotides or as fragments created by PCR (Table 1). For sequences comprising 2 × 60, 3 × 40, and ACH, specific sets of complementary oligonucleotides were synthesized and annealed. The synthetic poly(A) fragments of A120, 2 × 60_1, 2 × 60_12, and 2 × 60_24 were created by PCR. Annealing of complementary oligonucleotides was performed as follows: 100 µM of each oligonucleotide were mixed with 40 µL annealing buffer (10 mM Tris-HCl, 50 mM NaCl, 1 mM EDTA, pH 7.5) and incubated for 5 min at 95°C. Subsequently, the mixture was allowed to cool to room temperature before proceeding with restriction digestion (BglII-BstBI). For high-performance PCR, Phusion High-Fidelity PCR master mix (Thermo Fisher Scientific) was used. To the master mix, which contains 2× Phusion DNA polymerase, nucleotides and optimized reaction buffer including MgCl2, 0.5 µM each of forward and reverse primer, 3% DMSO, and 1 ng of template DNA were added. The total volume of 25 µL per reaction was initially denatured at 98°C for 30 sec, followed by 30 cycles at 98°C for 10 sec, annealing at 72°C for 30 sec, and extension at 72°C for 30 sec/kb. The final extension was performed at 72°C for 10 min. The size of the PCR product was confirmed on a 1% agarose gel and the desired band was purified using the NucleoSpin Gel and PCR clean-up kit (Macherey Nagel). The purified PCR product was digested with NheI-BstBI and stored at −20°C until further use. Digested products of annealed oligonucleotides and PCR products were cloned into the accordingly digested pUC57-Kana vector (GenScript) containing the desired coding sequences (firefly luciferase, d2EGFP, human EPO, and human CFTR).

Generation of mRNA

To generate in vitro transcribed mRNA, plasmids were linearized by BstBI (Thermo Fisher Scientific) digestion and purified by chloroform extraction and ethanol precipitation. Purified linear plasmids were used as templates for in vitro transcription. Plasmid templates (0.5 µg/µL) were subjected to in vitro transcription using 3 U/µL T7 RNA polymerase (Thermo Fisher Scientific), transcription buffer II (Ethris GmbH), 1 U/µL RiboLock RNase inhibitor (Thermo Fisher Scientific), and 0.015 U/µL inorganic pyrophosphatase 1 (Thermo Fisher Scientific) with a defined choice of natural and chemically modified ribonucleotides (Jena Biosciences). Modification set 1 was synthesized using 5-methylcytidine (25%) and 2-thiouridine (25%), in addition to unmodified nucleotides. For modification set 2, instead of 5-methylcytidine (25%) and 2-thiouridine (25%), 5-iodouridine (35%) and 5-iodocytidine (7.5%) were used. The complete IVT mix was incubated at 37°C for 2 h. Afterwards, 0.01 U/µL DNase I (Thermo Fisher Scientific) was added for an additional 45 min at 37°C to remove the plasmid template.
RNA was precipitated with ammonium acetate at a final concentration of 2.5 mM, followed by two washing steps with 70% ethanol. The pellet was re-suspended in aqua ad injectabilia. A C1-m7G cap structure was added enzymatically by 0.5 mM Vaccinia virus capping enzyme (New England Biolabs) to the 5′ end of the previously denatured (80°C for 5 min) transcript (1 mg/mL). The capping reaction mix also contained 1× capping buffer (New England Biolabs), 0.5 mM GTP (New England Biolabs), 0.2 mM S-methyladenosine (New England Biolabs), 2.5 U/µL mRNA Cap 2′-O-methyltransferase (New England Biolabs), and 1 U/µL RiboLock RNase inhibitor (Thermo Fisher Scientific). The capping mixture was incubated for 60 min at 37°C, followed by RNA precipitation with ammonium acetate at a final concentration of 2.5 mM and two washing steps with 70% ethanol. The pellet was re-suspended in aqua ad injectabilia. RNA quality and concentration were measured spectrophotometrically on a NanoDrop 2000C (Thermo Fisher Scientific). Correct size and purity were determined via automated capillary electrophoresis (Fragment Analyzer, Advanced Analytical).

In vitro transfection

A549 and HEK293 cells were seeded at densities of 2 × 10^4 cells/well and 4 × 10^4 cells/well, respectively, in 96-well plates for firefly luciferase assays, FACS measurements and the EPO ELISA. 16HBE14o- cells were seeded in 6-well plates at a density of 7.5 × 10^5 cells/well for western blot analysis. At 24 h post-seeding, cells were transfected using the commercial transfection reagent Lipofectamine 2000 (Thermo Fisher Scientific). Complexes were prepared at a ratio of 2 µL Lipofectamine 2000 per 1 µg mRNA. A549 and HEK293 cells were transfected with 250 ng/well and with 250 and 125 ng/well mRNA, respectively. For experiments in A549 and HEK293 cells, the required amounts of mRNA were diluted in water and the needed amounts of Lipofectamine 2000 in serum-free MEM. The mRNA was added to the Lipofectamine 2000 solution, followed by 20 min incubation at RT. The concentration of the final mRNA/Lipofectamine 2000 solution was 25 ng/µL. Ten microliters of the complex solution were added to the cells and cells were incubated for 24 h. For every mRNA construct, replicates of three or six were prepared. For 16HBE14o- cells, Lipofectamine MessengerMax was used due to its superior transfection efficiency (data not shown). For transfection, 7.5 µg mRNA was diluted in 125 µL water, and 11.25 µL Lipofectamine MessengerMax separately in 125 µL serum-free MEM. The mRNA solution was added to the Lipofectamine MessengerMax solution, followed by 5 min incubation at RT. A total volume of 250 µL of the lipoplex solution was added to the cells containing 2 mL normal growth medium. The medium was changed 4 h after transfection.

Flow cytometry analysis for d2EGFP

Cells were washed with PBS, detached with TrypLE (Gibco/Life Technologies), and re-suspended in flow cytometry buffer (PBS supplemented with 10% FBS). Shortly before measurement, cells were stained with propidium iodide for discrimination between live and dead cells (1 µg/mL; Sigma Aldrich).
Analysis was performed on an Attune Acoustic Focusing Cytometer (Life Technologies) with Attune Cytometric Software (version 2.1; Life Technologies) and FlowJo (version 10).

Firefly luciferase assay

For detection of firefly luciferase activity, the assay was performed 24 h post-transfection. Cells were washed with PBS, followed by addition of 100 µL lysis buffer (25 mM Tris-HCl, 0.1% Triton X-100, pH 7.4). Cells were shaken for 20 min at room temperature. After lysis, 50 µL of the cell lysate was used to measure luciferase activity via photon luminescence emission for 5 sec using an Infinite 200 PRO (Tecan). The protein amount in each sample was quantified in 5 µL of the cell lysate with the Bio-Rad protein assay (Bio-Rad), using bovine serum albumin as a standard. Luciferase values were normalized to the protein concentrations.

Enzyme-linked immunosorbent assay for hEPO

Quantification of hEPO protein in cell supernatants was performed using the human Erythropoietin Quantikine IVD ELISA kit (R&D Systems) following the manufacturer's instructions.

RNA isolation and reverse transcription

RNA was isolated at different time points post-transfection using the Single Shot Cell Lysis kit (Bio-Rad) following the manufacturer's protocol. Prior to RNA extraction, the cell culture medium was removed and cells were washed twice with PBS before being lysed in the respective RNA isolation buffer. From the lysates (1 µg of RNA), cDNA was synthesized using the iScript Select cDNA Synthesis kit (Bio-Rad) with oligo(dT) primers following the manufacturer's instructions. The synthesized cDNA was stored at −20°C.

Statistical analysis

Each experiment was performed with at least three technical replicates per sample. Results are shown as means ± SD unless otherwise stated. Statistical analysis was performed using GraphPad Prism software (version 6). Data were tested for normal distribution using the D'Agostino-Pearson omnibus normality test. Multiple comparisons were conducted by two-way ANOVA, followed by Sidak's test (pairwise comparison) or Dunnett's test (many-to-one comparison). A P-value ≤ 0.05 was considered statistically significant.
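For readers without GraphPad Prism, the two-way ANOVA step of the workflow described above can be outlined with open-source tools. The following is a minimal Python sketch using statsmodels on a hypothetical long-format table; the column names and the numbers are invented for illustration, and the Sidak/Dunnett post hoc comparisons used in the paper are not reproduced here.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical long-format data: one row per well (values are invented).
df = pd.DataFrame({
    "poly_a":       ["A120", "A120", "A120", "2x60_6", "2x60_6", "2x60_6"] * 2,
    "modification": ["mod1"] * 6 + ["mod2"] * 6,
    "luciferase":   [1.0, 1.1, 0.9, 1.6, 1.5, 1.7, 0.8, 0.9, 1.0, 1.4, 1.3, 1.5],
})

# Two-way ANOVA with interaction: poly(A) format x nucleotide modification.
model = smf.ols("luciferase ~ C(poly_a) * C(modification)", data=df).fit()
print(anova_lm(model, typ=2))

# The paper's Sidak (pairwise) or Dunnett (many-to-one) post hoc tests would
# follow on the fitted model; they are intentionally omitted from this sketch.
```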
2019-01-22T22:33:29.528Z
2019-01-15T00:00:00.000
{ "year": 2019, "sha1": "b0737c906eedbafcf6a4e716d98d43153eebc930", "oa_license": "CCBYNC", "oa_url": "http://rnajournal.cshlp.org/content/25/4/507.full.pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "6eb4a269e050a3dcc98c1b54752f087eead57e5d", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
7864816
pes2o/s2orc
v3-fos-license
Extended finite operator calculus as an example of algebraization of analysis A wardian calculus of sequences started almost seventy years ago constitutes the general scheme for extensions of the classical umbral operator calculus considered by many afterwards . At the same time this calculus is an example of the algebraization of the analysis here restricted to the algebra of formal series. This is a review article based on the recent first author contributions. As the survey article it is supplemented by the short indicatory glossaries of notation and terms used by prominent contributors to the domain. q-calculus -has its own characteristic properties not pertaining to the standard case of Rota calculus realization. Nevertheless the overall picture and system of statements depending only on GHW algebra is the same modulo some automatic replacements in formulas demonstrated in the sequel. The large part of that kind of job was already done in [1,3,35]. The aim of this presentation is to give a general picture ( see: Section 3) of the algebra of linear operators on polynomial algebra. The picture that emerges discloses the fact that any ψ-representation of finite operator calculus or equivalently -any ψ-representation of GHW algebra makes up an example of the algebraization of the analysis with generalized differential operators [12] acting on the algebra of polynomials. We shall delimit all our considerations to the algebra P of polynomials or sometimes to the algebra of formal series. Therefore the distinction between difference and differentiation operators disappears. All linear operators on P are both difference and differentiation operators if the degree of differentiation or difference operator is unlimited. If all this is extended to Markowsky Q-umbral calculus [12] then many of the results of ψ-calculus may be extended to Q-umbral calculus [12]. This is achieved under the almost automatic replacement of {D,x, id} generators of GHW or their ψ-representation {∂ ψ ,x ψ , id} by their Q-representation correspondents {Q,x Q , id} -see definition 2.5. Primary definitions, notation and general observations In the following we shall consider the algebra P of polynomials P =F[x] over the field F of characteristic zero. All operators or functionals studied here are to be understood as linear operators on P . It shall be easy to see that they are always well defined. Throughout the note while saying "polynomial sequence {p n } ∞ 0 " we mean deg p n = n; n ≥ 0 and we adopt also the convention that deg p n < 0 iff p n ≡ 0. Then (note that for admissible ψ, 0 ψ = 0) Definition 2.1. Let ψ be admissible. Let ∂ ψ be the linear operator lowering degree of polynomials by one defined according to ∂ ψ x n = n ψ x n−1 ; n ≥ 0. Then ∂ ψ is called the ψ-derivative. is called the generalized translation operator. Polynomial sequences of ψ-binomial type [3,4,1] are known to correspond in one-to-one manner to special generalized differential operators Q, namely to those Q = Q (∂ ψ ) which are ∂ ψ -shift invariant operators [3,4,1]. We shall deal in this note mostly with this special case,i.e. with ψ-umbral calculus. However before to proceed let us supply a basic information referring to this general case of Q-umbral calculus. Right from the above definitions we infer that the following holds. If {q k } k ≥ 2 and an admissible ψ exist then these are unique. Notation 2.1. 
In the case (2.2) is true we shall write : Q = Q (∂ ψ ) because then and only then the generalized differential operator Q is a series in powers of ∂ ψ . Remark 2.2. Note that operators of the (2.1) form constitute a group under superposition of formal power series (compare with the formula (S) in [13]). Of course not all generalized difference-tial operators satisfy (2.1) i.e. are series just only in corresponding ψ-derivative ∂ ψ (see Proposition 3.1 ). For example [15] let Q = 1 2 DxD − 1 3 D 3 . Then Qx n = 1 2 n 2 x n−1 − 1 3 n 3 x n−3 so according to Observation 2.1 n ψ = 1 2 n 2 and there exists no admissible ψ such that Q = Q (∂ ψ ).Herex denotes the operator of multiplication by x while n k is a special case of n k ψ for the choice n ψ = n. Observation 2.2. From theorem 3.1 in [12] we infer that generalized differential operators give rise to subalgebras Q of linear maps (plus zero map of course) commuting with a given generalized difference-tial operator Q. The intersection of two different algebras Q 1 and Q 2 is just zero map added. The importance of the above Observation 2.2 as well as the definition below may be further fully appreciated in the context of the Theorem 2.1 and the Proposition 3.1 to come. Definition 2.5. Let {p n } n≥0 be the normal polynomial sequence [12] ,i.e. p 0 (x) = 1 and p n (0) = 0 ; n ≥ 1. Then we call it the ψ-basic sequence of the generalized difference-tial operator Q if in addition Q p n = n ψ p n−1 . In parallel we define a linear mapx Q : P → P such thatx Q p n = (n+1) (n+1) ψ p n+1 ; n ≥ 0. We call the operatorx Q the dual to Q operator. Of course [Q,x Q ]= id therefore {Q,x Q , id} provide us with a continuous family of generators of GHW in -as we call it -Q-representation of Graves-Heisenberg-Weyl algebra. In the following we shall restrict to special case of generalized differential operators Q, namely to those Q = Q (∂ ψ ) which are ∂ ψ -shift invariant operators [3, 4, 1] (see: Definition 2.6). At first let us start with appropriate ψ-Leibnitz rules for corresponding ψderivatives. ψ-Leibnitz rules: It is easy to see that the following hold for any formal series f and g: where -note -R qQ x n−1 = n R x n−1 ; (n ψ = n R = n R(q) = R (q n )) and finally for ∂ ψ =n ψ ∂ 0 : Here R(z) is any formal Laurent series; Qf (x) = f (qx) and n ψ = R(q n ). ∂ 0 is q = 0 Jackson derivative which as a matter of fact -being a difference operator is the differential operator of infinite order at the same time: The equivalent to (2.3) form of Bernoulli-Taylor expansion one may find [16] in Acta Eruditorum from November 1694 under the name "series univeralissima". (Taylor's expansion was presented in his "Methodus incrementorum directa et inversa" in 1715 -edited in London). Definition 2.6. Let us denote by End(P ) the algebra of all linear operators acting on the algebra P of polynomials. Let Then ψ is a commutative subalgebra of End(P ) of F-linear operators. We shall call these operators T : ∂ ψ -shift invariant operators. We are now in a position to define further basic objects of "ψ-umbral calculus" [3,4,1]. The strictly related notion is that of the ∂ ψ -basic polynomial sequence: Identification 2.1. It is easy to see that the following identification takes place: Of course not every generalized differential operator might be considered to be such. and Φ is the unique solution of this eigenvalue problem. If in addition (2.2) is satisfied then there exists such an admissible sequence ϕ that Φ (x; λ) = exp ϕ {λx} (see Example 3.1). 
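To make the q-realization referred to above concrete, the following sympy sketch implements the Jackson q-derivative — the prototypical ψ-derivative, with n_ψ = [n]_q = (1 − q^n)/(1 − q) — and checks the standard q-Leibniz rule ∂_q(fg) = (∂_q f) g + f(qx) ∂_q g. The helper name d_q and the test polynomials are ours and purely illustrative; the rule itself is the classical one.

```python
import sympy as sp

x, q = sp.symbols('x q')

def d_q(f):
    """Jackson q-derivative: (f(x) - f(q x)) / ((1 - q) x); on monomials d_q x**n = [n]_q x**(n-1)."""
    return sp.simplify((f - f.subs(x, q*x)) / ((1 - q)*x))

# d_q x**4 equals [4]_q * x**3 = (1 + q + q**2 + q**3) * x**3
print(d_q(x**4))

# Standard q-Leibniz rule:  d_q(f g) = (d_q f) g + f(q x) (d_q g)
f, g = x**3 + 1, 2*x**2 + x
lhs = d_q(sp.expand(f*g))
rhs = d_q(f)*g + f.subs(x, q*x)*d_q(g)
print(sp.simplify(lhs - rhs))    # 0
```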
The notation and naming established by Definitions 2.7 and 2.8 serve the target to preserve and to broaden simplicity of Rota's finite operator calculus also in its extended "ψ-umbral calculus" case [3,4,1]. As a matter of illustration of such notation efficiency let us quote after [3] the important Theorem 2.1 which might be proved using the fact that ∀ Q (∂ ψ ) ∃! invertible S ∈ Σ ψ such that Q (∂ ψ ) = ∂ ψ S. ( For Theorem 2.1 see also Theorem 4.3. in [12], which holds for operators, introduced by the Definition 2.5). Let us define at first what follows. where the linear mapx ψ : P → P ; is defined in the basis {x n } n≥0 as followŝ Then the following theorem is true [3] Theorem 2.1. (ψ-Lagrange and ψ-Rodrigues formulas [34,11,12,23,3]) For the proof one uses typical properties of the Pincherle ψ-derivative [3].Because ∂ ψ ' = id we arrive at the simple and crucial observation. One derives the above ψ-Leibnitz rule from ψ-Heisenberg-Weyl exponential commutation rules exactly the same way as in {D,x, id} GHW representation -(compare with 2.2.1 Proposition in [18] ). ψ-Heisenberg-Weyl exponential commutation relations read: To this end let us introduce a pertinent ψ-multiplication * ψ of functions as specified below. For k = n x n * ψ x k = x k * ψ x n as well as x n * ψ x k = x n+k -in general i.e. for arbitrary admissible ψ; compare this with (x + ψ a) n = (x + ψ a) n−1 (x + ψ a). In order to facilitate in the future formulation of observations accounted for on the basis of ψ-calculus representation of GHW algebra we shall use what follows. Definition 2.10. With Notation 2.2 adopted let us define the * ψ powers of x according to Note that x n * ψ * ψ x k * ψ = n! n ψ ! x (n+k) * ψ = x k * ψ * ψ x n * ψ = k! k ψ ! x (n+k) * ψ for k = n and x 0 * ψ = 1. This noncommutative ψ-product * ψ is deviced so as to ensure the following observations. which is the unique solution (up to a constant factor) of the ∂ ψ -difference equations systems As announced -the rules of ψ -product * ψ are accounted for on the basis of ψ-calculus representation of GHW algebra. Indeed,it is enough to consult Observation 2.5 and to introduce ψ-Pincherle derivation∂ ψ of series in powers of the symbolx ψ as below. Then the correspondence between generic relative formulas turns out evident. where f is a formal series in powers ofx ψ or equivalently in * ψ powers of x. As an example of application note how the solution of 2.7 is obtained from the obvious solution p m (x ψ ) of the∂ ψ -Pincherle differential equation 2.8 formulated within G-H-W algebra generated by {∂ ψ ,x ψ , id} 3 The general picture of the algebra End(P ) from GHW algebra point of view The general picture from the title above relates to the general picture of the algebra End(P ) of operators on P as in the following we shall consider the algebra P of polynomials P = F[x] over the field F of characteristic zero. With series of Propositions from [1,3,35,21] we shall draw an over view picture of the situation distinguished by possibility to develop further umbral calculus in its operator form for any polynomial sequences {p n } ∞ 0 [12] instead of those of traditional binomial type only. In 1901 it was proved [20] that every linear operator mapping P into P may be represented as infinite series in operatorsx and D. In 1986 the authors of [21] supplied the explicit expression for such series in most general case of polynomials in one variable ( for many variables see: [22] ). Thus according to Proposition 1 from [21] one has: Proposition 3.1. 
Let Q be a linear operator that reduces by one the degree of each polynomial. Let {q n (x)} n≥0 be an arbitrary sequence of polynomials in the operatorx. ThenT = n≥0 q n (x)Q n defines a linear operator that maps polynomials into polynomials. Conversely, ifT is linear operator that maps polynomials into polynomials then there exists a unique expansion of the form It is also a rather matter of an easy exercise to prove the Proposition 2 from [21]: QΦ (x; λ) = λΦ (x; λ) . To be complete let us still introduce [3,4] an important operatorx Q(∂ ψ ) dual to Q (∂ ψ ). It is now obvious that the following holds. of dual operators is expected to play a role in the description of quantum-like processes apart from the q-case now vastly exploited [3,4]. Naturally the Proposition 3.2 for Q (∂ ψ ) andx Q(∂ ψ ) dual operators is also valid. Summing up: we have the following picture for End(P ) -the algebra of all linear operators acting on the algebra P of polynomials. and of course Q(P ) = End(P ) where the subfamily Q(P ) (with zero map) breaks up into sum of subalgebras Q according to commutativity of these generalized difference-tial operators Q (see Definition 2.4 and Observation 2.2). Also to each subalgebra ψ i.e. to each Q (∂ ψ ) operator there corresponds its dual operator operators are sufficient to build up the whole algebra End(P ) according to unique representation given by (3.1) including the ∂ ψ andx ψ case. Summarising: for any admissible ψ we have the following general statement. General statement: i.e. the algebra End(P ) is generated by any dual pair {Q ,x Q } including any dual pair {Q (∂ ψ ) ,x Q(∂ ψ ) } or specifically by {∂ ψ ,x ψ } which in turn is determined by a choice of any admissible sequence ψ. As a matter of fact and in another words: we have bijective correspondences between different commutation classes of ∂ ψ -shift invariant operators from End(P ), different abelian subalgebras ψ , distinct ψ-representations of GHW algebra, different ψ-representations of the reduced incidence algebra R(L(S)) -isomorphic to the algebra Φ ψ of ψ-exponential formal power series [3] and finally -distinct ψumbral calculi [8,12,15,24,34,3,35]. These bijective correspondences may be naturally extended to encompass also Q-umbral calculi [12,1], Q-representations of GHW algebra [1] and abelian subalgebras Q . (Recall: R(L(S)) is the reduced incidence algebra of L(S) where L(S)={A; A⊂S; |A| < ∞}; S is countable and (L(S); ⊆) is partially ordered set ordered by inclusion [11,3] ). This is the way the Rota's devise has been carried into effect. The devise "much is the iteration of the few" [11] -much of the properties of literally all polynomial sequences -as well as GHW algebra representations -is the application of few basic principles of the ψ-umbral difference operator calculus [3,35,1]. ψ− Integration Remark : Recall also that there corresponds to the "∂ q difference-ization" the q-integration [25,26,27] which is a right inverse operation to "q-difference-ization" [35,1]. Namely i.e. Naturally (3.5) might serve to define a right inverse operation to "q-differenceization" and consequently the "q-integration " as represented by (3.2) and (3.3). As it is well known the definite q-integral is an numerical approximation of the definite integral obtained in the q → 1 limit. Following the q-case example we introduce now an R-integration (consult Remark 2.1). Let us then finally introduce the analogous representation for ∂ ψ difference-ization ∂ ψ =n ψ ∂ o ;n ψ x n−1 = n ψ x n−1 ; n ≥ 1. 
(3.8) Then ψ x n = x 1 n ψ x n = 1 (n + 1) ψ x n+1 ; n ≥ 0 (3.9) and of course Closing Remark: The picture that emerges discloses the fact that any ψ-representation of finite operator calculus or equivalently -any ψ-representation of GHW algebra makes up an example of the algebraization of the analysis -naturally when constrained to the algebra of polynomials. We did restricted all our considerations to the algebra P of polynomials. Therefore the distinction in-between difference and differentiation operators disappears. All linear operators on P are both difference and differentiation operators if the degree of differentiation or difference operator is unlimited. Thus the difference and differential operators and equations are treated on the same footing. For new applications -due to the first author see [4,1,[36][37][38][39][40][41]. Our goal here was to deliver the general scheme of "ψ-umbral" algebraization of the analysis of general differential operators [12]. Most of the general features presented here are known to be pertinent to the Q representation of finite operator calculus (Viskov, Markowsky, Roman) where Q is any linear operator lowering degree of any polynomial by one . So it is most general example of the algebraization of the analysis for general differential operators [12]. Glossary In order to facilitate the reader a simultaneous access to quoted references of classic Masters of umbral calculus -here now follow short indicatory glossaries of notation used by Ward [2], Viskov [7,8], Markowsky [11], Roman [28]- [32] on one side and the Rota-oriented notation on the other side. See also [33]. Ward Rota -oriented (this note) [n, r] x n−r y r (x + ψ y) n = n k=0 n k ψ x k y n−k Ward Rota -oriented (this note) basic displacement symbol generalized shift operator Roman Rota -oriented (this note) evaluation functional generalized shift operator formal power series f (t) formal power series Q(∂ ψ ) for (g(t), f (t)) of Q(∂ ψ ) and S ∂ ψ Roman Rota -oriented (this note) The expansion theorem: The First Expansion Theorem The Sheffer Identity: The Sheffer ψ-Binomial Theorem: s n (x + y) = n k=0 n k p n (y)s n−k (x) s n (x + ψ y) = k≥0 n k ψ s k (x)q n−k (y)
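As a closing numerical footnote to this survey: the ψ-representation of the Graves-Heisenberg-Weyl commutation rule [∂_ψ, x̂_ψ] = id, which underlies most of the statements above, can be verified symbolically on monomials. The sketch below does so in the q-realization (n_ψ = [n]_q); the variable names are ours and the check is illustrative only, not part of the original exposition.

```python
import sympy as sp

x, q, n = sp.symbols('x q n', positive=True)
nq = lambda m: (1 - q**m) / (1 - q)            # admissible sequence n_psi = [n]_q

# Action on a generic monomial x**n (n >= 1):
#   d_psi    x**n = [n]_q x**(n-1)
#   xhat_psi x**n = (n+1)/[n+1]_q x**(n+1)
# Compose in both orders and check [d_psi, xhat_psi] x**n = x**n.
d_after_xhat = (n + 1) / nq(n + 1) * nq(n + 1) * x**n   # d_psi(xhat_psi x**n)
xhat_after_d = nq(n) * n / nq(n) * x**n                 # xhat_psi(d_psi x**n)
print(sp.simplify(d_after_xhat - xhat_after_d))         # x**n, i.e. the identity
```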
Reduced-Order-Model-Based Feedback Design for Thruster-Assisted Legged Locomotion Real-time constraint satisfaction for robots can be quite challenging due to the high computational complexity that arises when accounting for the system dynamics and environmental interactions, often requiring simplification in modelling that might not necessarily account for all performance criteria. We instead propose an optimization-free approach where reference trajectories are manipulated to satisfy constraints brought on by ground contact as well as those prescribed for states and inputs. Unintended changes to trajectories especially ones optimized to produce periodic gaits can adversely affect gait stability, however we will show our approach can still guarantee stability of a gait by employing the use of coaxial thrusters that are unique to our robot. INTRODUCTION We have categorized constraint satisfaction in legged robots in three broad categories. Namely: i) trajectory optimization (TO), ii) optimization-based controls and iii) reference trajectory manipulation. i) The goal of the TO problem is to generate optimal trajectory which satisfy constraints on states, inputs and ground reaction forces (GRF) while ensuring that the trajectories lead to stable walking gaits. TO problems for legged robot are difficult to solve due to their nonlinear dynamics, high degrees of freedom and the hybrid nature of the system brought on by ground impact [1,2,3,4,5]. Previous works such as [6], [7], [8] have proposed methods to transcribe this as a non-linear programming (NLP) problem through direct collocation methods where polynomial splines are used to approximate the continuous dynamics and thus reducing computational complexity without needing to account for the actual dynamics. While [9] has instead proposed utilizing multiple shooting method to break the original problem down into smaller steps without approximations. In both cases, however, the dynamics of the system needs to be considered along with contact dynamics to generate the trajectories. To alleviate the need of explicitly defining ground contact dynamics [10], [11] have employed null space projection methods, whereas zero acceleration constraint are enforced on feet ends in [7]. The issue still remains that these methods are extremely computationally expensive and cannot be implemented in real time, taking a few minutes to solve the TO problem. For works that use reduced order models such as the centroidal dynamics or utilize zero moment point (ZMP) based methods as in [12] and [13], experimental results of online-optimization are available however they are restricted to pseudo-static gaits rather than dynamic ones. ii) In optimization-based control schemes, the goal is to compute constraint-aware feedback stabilizing control loops. This is most commonly achieved through a predictive framework, usually by creating a linearized model over finite time horizon. For instance in [14], [15], [16] reduced-order model around the center of mass are used in a hierarchical framework to generate reference acceleration for the lower level controller to track. The downfall of these options are the need to linearize and/or simplify the underlying dynamics of the robot in order to make it feasible in real-time, and as a result not all constraints can be accounted for. 
In [17], [11], [18], [19] the desired control inputs are computed taking into account the full dynamics, and then optimization is carried out on a simplified least square or QP problem for tractability. iii) A different approach is to remove optimization from the control strategy and instead modify the reference trajectories to obey desired constraints. This idea was popularized through reference governors (RG) [20], where an efficient online optimization method is employed. The core of the idea being that the reference trajectory that the controller must follow can be manipulated while keeping it close to the original trajectory in the event that boundaries created by constraints are to be violated. Since its inception this idea has spawned many variations [21], [22] including an optimization free approach known as explicit reference governor (ERG) [23]. Besides the possibility of utilizing an optimization-free form, the other major advantage with RG is that it acts as a add-on scheme to an existing controller without the need of any modification on the control scheme. While gait re-design in a reactive fashion is widely used in quasi-static walkers, one major reason that within-strides gait adjustment is less common in dynamic walkers is that their small support polygons, which can be as small as a point contact, leave small to no stability margins to avoid fall-overs. With this observation, we aim to apply a new control action, i.e., thrusters, during each gait cycle. We will employ thruster actions during a small part of a gait cycle to secure hybrid invariance. In this paper we extend our previous works [24,25] by offering a systematic method, and modify the RG formulation to work with a popular framework known as hybrid zero dynamics (HZD) used to create stable gaits in bipeds. OVERVIEW OF CONTROL DESIGN IDEA In this section, we outline an overview of our approach to satisfy state, control and GRF constraints during gait cycles by deforming the Zero Dynamics Manifolds describing the gaits which will be described in brevity here. The governing equations of motion are derived using the methods of Lagrange by taking the kinetic and potential energies of the system. The resulting equations of motion are expressed in vector form as following where q s is the configuration variable vector, D s (q s ) is the symmetric inertial matrix and is only dependent on q s , the H s (q s ,q s ) matrix contains Coriolis and gravity terms, and the control matrix B s (q s ) maps the inputs to the generalized coordinate accelerationsq s . Consequently, the full model can be written in state-space form aṡ where the state vector is denoted by x s = [q s ,q s ] . The notion of directional derivatives L f y = ∂y ∂x f and holonomic constraints y = h s (x s ) = h s (q s ) will be adopted in a similar way as mainstream publications in this field. The holonomic constraints (y) are widely known as Virtual Constraints (VCs) since they are enforced by closed-loop feedback. Based on these constraints celebrated control invariant sets of the form Since there is only one degree of under-actuation (DOU) in our model, therefore, h −1 s (0) takes the form of a closed curve. We consider the following parametric descriptions q 1 = r 1 (q n ), . . . , q n−1 = r n−1 (q n ) where q n in our planar model is the last entry of q s . As a result, the output function takes the following form: Note that r = [r 1 , . . . 
, r n−1 ] and the matrix H ∈ R (n−1)×n can take a trivial form if each joint independently is derived with a single actuator only, here q Act denotes the actuated variables. In a nutshell, the idea is to continuously deform q s = h −1 s (0) such that the following conditions are satisfied. First, we want h −1 s (0) to remain continuous and closed curves, i.e., r(q n (t)) = r(q n (t + T )) where T is the gait period. Second, we want Γ remain stabilizable at all times otherwise enforcing VCs will be impossible. Last, we want gait feasibility conditions including equality C eq (q n ,q n ) = 0 and inequality constraints 0 ≤ C ineq (q n ,q n ) to be satisfied. This problem can take the following constrained ordinary differential equation form: where, the first line governs restriction dynamics and the second line is the condition for the stabilizability of Γ. Widely considered gait feasibility constraints such as q n (t + T ) = q n (t),q n (t + T ) = q n (t), |x s | < x max , |u| < u max , 0 < F N and | Fτ F N | < µ can potentially form the equality and inequality constraints, where F τ , F N are the tangential and normal GRF respectively. This problem can be looked upon as a classical time-invariant, trajectory tracking problem, i.e., q Act parameterized in terms of q n . It can be easily resolved using optimization. In our approach, to solve the problem in an optimization-free fashion (or minimally use optimization) positive invariance property of Γ -i.e., being able to find the control input u such that particularly when q s (0) is on q s = h −1 s (0),q s (t) remains tangent to q s = h −1 s (0) yielding the solutions q s (t) remain on Γ for all t > 0 -plays a key role and is closely dependent on how q s = h −1 s (0) is deformed. In the case that this property is guaranteed, the constraint satisfaction problem can be transformed into a motion planning problem in the state space of the internal dynamics which can be conveniently tackled using simple path planning tools. This particularly becomes important and handy when fast and reactive gait planing is in need in dynamic walkers. We will further elaborate this concept with a simple example. VALID DEFORMATIONS OF Consider the state-space representation of the system dynamics given by (2). Here, we will explore our options and choices in order to manipulate (3). Our possible options are: i) scaling, i.e., w i q n , and ii) shifting, i.e., q n + w i , of the equilibrium point of the error function y(q s ). Evidently, constant priming terms yield a discrete collections of parameterized systems, i.e., x s = f s (x s ) + g s (x s )u ω , which is not desired here. Considering the numerical parameters ω, which are used to parametereize r(q n , ω), as auxiliary control input in discrete maps (e.g., Poincare maps) can only provide discontinuous means of priming Γ ω at the boundaries. The continuous manipulation of these parameters can violate the transversality condition. Consider y = q Act − r(q n , ω). For a fixed q n , it is possible to show that can vanish on Γ ω . Here, B * s (q n ) = [0 1×(n−1) , 1] and the annihilation of (5) implies g s (x s ) and ∂hs(xs) ∂xs are orthogonal. On the other hand, it is straightforward to show thatẏ =q Act − r (q n )q n + ω(t) yields relative degree {2, . . . , 2} on Γ ω . It is also noticeable that this choice of manipulating y has no effects on null ∂hs ∂qs = r (q n ) , 1 which means that at least the primer has no influence over the transversality condition as long as y = q Act − r(q n ) is itself is valid VC. 
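For illustration, the inequality part of the gait-feasibility conditions listed earlier (state and input bounds, a unilateral normal force, and the friction cone) can be stacked into a single residual vector. The sketch below is schematic: it uses scalar bounds and our own function and argument names, not the notation of any particular implementation.

```python
import numpy as np

def gait_feasibility(x, u, F_tau, F_N, x_max, u_max, mu):
    """
    Stack the inequality gait-feasibility constraints as a vector C_ineq;
    feasibility requires C_ineq >= 0 component-wise.  F_tau and F_N are the
    tangential and normal ground reaction forces.
    """
    return np.array([
        x_max - np.max(np.abs(x)),      # |x_s| < x_max
        u_max - np.max(np.abs(u)),      # |u|   < u_max
        F_N,                            # 0 < F_N  (unilateral contact)
        mu - abs(F_tau) / F_N,          # |F_tau / F_N| < mu  (friction cone)
    ])
```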
Notice that the role of the primer ω(t) in this form is comparable to the role of a disturbance term in the system given below where γ 1 contains the feed-forward terms [26], . As a result, the closed-loop system has to possess strong disturbance rejection properties. The roles of this disturbance term will be beneficial for us though as it will adjust the equilibria of h s (x s ) and L fs h s (x s ) under stabilising controllers with adjustable (and measurable) basins of attraction to successfully satisfy state, control and GRF constraints. To do this, we need to design an update law for the primer variable vector ω(t) such that the finite-time convergence of the solutions to Γ ω in the closed-loop system is unaffected. As far as the design of u is of concern, any nonlinear controller (or linear controllers if the nonlinear terms are bounded and the bounds are known) can satisfy the VCs. We will limit ourselves to the following modest feedback law u = −L g L f h s (x s )(L 2 f h s (x s ) + K p y + K dẏ ) where K i ∈ R (n−1)×(n−1) are constant matrices and instead will remain focused on deforming Γ ω in order to satisfy our constraints. Hence, we will assume stabilising supervisory controllers that guarantee the enforcement of the virtual constraints, however, their disturbance rejection properties has to be carefully considered. The control law given above can generate GAS at the equilibrium point of the system given by (6) when the primer variable vector is time-invariant, i.e., ω(t) = ω. Subsequently, Γ ω takes the following form where the equilibrium points for h s (x s ) and L fs h s (x s ) under ω(t) are obtained by solving Of course, realizing GAS property under ω(t), i.e., when the primer variable is time-varying will not be achieved in a trivial fashion and requires an extra condition to be satisfied. We will discuss this in the proof of GAS property of the closed-loop system later. CONSTRAINTS DERIVATION Consider the configuration variable vector q s = q Act , p 1 , q n where q Act ∈ R m is the actuated joint angle, m is the number of actuated joints and p 1 ∈ R 2 is the stance leg contact point. The control matrix B s (q s ) for u = [u Act , u T hrust , F ] in the Euler-Lagrange equations can take the following form where p T hrust is the physical location of the thruster action u T hrust . Notice that based on how the thruster actions look like the underactuated coordinate q n can be actuated. The following restriction dynamics can be obtained at every point on Γ ω In this equation β 2 = B * s G s (q n ), where G s (q n ) contains terms affected by gravity, and β 1 is given by where Q i (q n ) are the Christoffel Symbols. A similar algebraic relationship for the constraints [u Act , F ] can be obtained which is skipped here. Next, we will steer y andẏ using the primer variable ω(t) in (6) in order to make sure the solutions of (10) stay within the constraint-admissible space. To do this, consider the y-ẏ space. Since we assumed a pre-stabilized system -in fact all of the above derivations only make sense if q Act = r(q n ) + τ 0 ω(τ )dτ andq Act = r (q n )q n + ω(t) -it is reasonable to evaluate the constraints c l ≤ [u Act , F ] ≤ c u (c l and c u are constraint lower and upper bounds) based on the steady-state solutions, i.e., y ω andẏ ω , and ignore the transient solutions. Other than simplifying the nonlinear constraint satisfaction problem given in (4), considering [y ω ,ẏ ω ] has another interesting result which will be explained below. 
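Before turning to the steady-state set promised above, we sketch the supervisory feedback law just quoted. We read it as the standard input-output linearizing law with the decoupling matrix L_g L_f h_s inverted; the function below is a minimal rendering with our own names, assuming the caller supplies the Lie-derivative terms evaluated at the current state.

```python
import numpy as np

def virtual_constraint_control(LgLfh, Lf2h, y, ydot, Kp, Kd):
    """
    Input-output linearizing feedback for the relative-degree-two outputs
    y = q_Act - r(q_n):
        u = -(Lg Lf h)^(-1) (Lf^2 h + Kp y + Kd ydot),
    which imposes  yddot + Kd ydot + Kp y = 0  along the closed-loop flow.
    The decoupling matrix LgLfh and the drift term Lf2h are assumed to be
    evaluated at the current state by the caller.
    """
    return -np.linalg.solve(LgLfh, Lf2h + Kp @ y + Kd @ ydot)
```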
Consider the set Y_ω, the locus of all steady-state solutions of the system (6). It is possible to show that invariant sets can be created around any point [y_ω, ẏ_ω] in Y_ω.

SIMULATION RESULTS

Fig. 1 shows the low-order representation of our legged robot with two thrusters attached to its torso. This section discusses the simulation setup and results, where the proposed framework is applied to an equivalent low-order model (e.g., a variable-length inverted-pendulum model) of the full system. The low-order model thus serves as a template on which the GRF, state, and input constraints are satisfied, and the full system's joint angles and velocities are then resolved through the forward kinematics equations. The purpose of the simulation is to show that the constraints, which are highly nonlinear and are often handled with costly optimizers, can be enforced in a completely optimization-free fashion. The equations of motion were integrated with a fourth-order Runge-Kutta scheme.

The primer algorithm described above was used to manipulate the state reference trajectories so that the constraints are satisfied. The following states of the low-order system were considered: the pendulum angle measured from the ground-plane normal, the heading angle, and the length of the virtual leg. Four constraints were then stacked inside the nonlinear vector function C_ineq of the inequality constraint equation (i.e., C_ineq ≥ 0): a minimum pendulum angle of five degrees, the ground friction-cone constraints in the x and y directions of the 3D model, and a minimum ground normal force of 20 N. The latter constraint guarantees that the stance leg-end always stays on the ground surface.

The simulation results are illustrated in Figs. 2 to 4. Figure 2 displays the state trajectories for the low-order model; a major deviation from the target reference is required at t = 0.5 s in order to avoid constraint violation.

Figure 2: Illustration of state-constraint satisfaction. The primer update law adjusts the joint reference trajectories to satisfy the constraints. This can be seen in the pendulum angle and virtual leg length, i.e., the low-order representation of the Harpy system, at 0.5 s. The algorithm prevents the pendulum angle from dropping below 5° as specified by the constraints.

Fig. 3 shows the GRF profiles obtained with the target references and compares them against the primed references. While the target references frequently violate the no-slip constraints, the primed trajectories operate within the permissible bounds (this can easily be verified in Fig. 3). The constraint behaviour under the proposed priming framework is summarized in Fig. 4: the target references lead to many violations, whereas the primed references largely do not. On a few occasions, however, the primed references do violate the constraints briefly. Based on the assumption outlined in Section 4, these temporary violations are expected, and, as is evident in Fig. 4, the trajectories are attracted back to the constraint-admissible sets after a violation occurs.
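The fourth-order Runge-Kutta integration mentioned above is standard; for completeness, a generic sketch (not the authors' code) is given below.

```python
import numpy as np

def rk4_step(f, t, x, h):
    """One step of the classical fourth-order Runge-Kutta scheme for dx/dt = f(t, x)."""
    k1 = f(t, x)
    k2 = f(t + 0.5*h, x + 0.5*h*k1)
    k3 = f(t + 0.5*h, x + 0.5*h*k2)
    k4 = f(t + h,     x + h*k3)
    return x + (h/6.0)*(k1 + 2.0*k2 + 2.0*k3 + k4)

def simulate(f, x0, t_end, h=1e-3):
    """Integrate the (pre-stabilized, primed) closed-loop dynamics forward in time."""
    ts = np.arange(0.0, t_end, h)
    xs = [np.asarray(x0, dtype=float)]
    for t in ts[:-1]:
        xs.append(rk4_step(f, t, xs[-1], h))
    return ts, np.array(xs)
```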
Cosmological Constraints from a Combination of Galaxy Clustering and Lensing -- I. Theoretical Framework We present a new method that simultaneously solves for cosmology and galaxy bias on non-linear scales. The method uses the halo model to analytically describe the (non-linear) matter distribution, and the conditional luminosity function (CLF) to specify the halo occupation statistics. For a given choice of cosmological parameters, this model can be used to predict the galaxy luminosity function, as well as the two-point correlation functions of galaxies, and the galaxy-galaxy lensing signal, both as function of scale and luminosity. In this paper, the first in a series, we present the detailed, analytical model, which we test against mock galaxy redshift surveys constructed from high-resolution numerical $N$-body simulations. We demonstrate that our model, which includes scale-dependence of the halo bias and a proper treatment of halo exclusion, reproduces the 3-dimensional galaxy-galaxy correlation and the galaxy-matter cross-correlation (which can be projected to predict the observables) with an accuracy better than 10 (in most cases 5) percent. Ignoring either of these effects, as is often done, results in systematic errors that easily exceed 40 percent on scales of $\sim 1 h^{-1}\Mpc$, where the data is typically most accurate. Finally, since the projected correlation functions of galaxies are never obtained by integrating the redshift space correlation function along the line-of-sight out to infinity, simply because the data only cover a finite volume, they are still affected by residual redshift space distortions (RRSDs). Ignoring these, as done in numerous studies in the past, results in systematic errors that easily exceed 20 perent on large scales ($r_\rmp \gta 10 h^{-1}\Mpc$). We show that it is fairly straightforward to correct for these RRSDs, to an accuracy better than $\sim 2$ percent, using a mildly modified version of the linear Kaiser formalism. ious parameter degeneracies inherent in the CMB data † , (ii) constraining certain cosmological parameters that are largely unconstrained by the CMB, such as evolution in the equation of state of dark energy, and (iii) for establishing a true concordance cosmology, i.e., a cosmological model that is in agreement with all possible data sets. With the advent of ever larger and more homogeneous galaxy redshift surveys, such as the Las Campanas Redshift Survey (LCRS; Shectman et al. 1996), the PSCz (Saunders et al. 2000), the two-Degree Field Galaxy Redshift Survey (2dFGRS; Colless et al. 2003) and the Sloan Digital Sky Survey (SDSS; York et al. 2000), there has been a steady improvement in the tightness and reliability of the corresponding cosmological constraints. Most of these studies focus on using galaxy clustering on large scales where one can rely on linear theory. Prime examples are constraints from (baryon acoustic oscillations in) the galaxy power spectrum (Percival et al. 2001;Cole et al. 2005;Eisenstein et al. 2005;Tegmark et al. 2006;Hütsi 2006;Percival et al. 2007a,b,c;Padmanabhan et al. 2007;Gaztanaga, Cabré & Hui 2009;Percival et al. 2010;Blake et al. 2011;Anderson et al. 2012). However, recently it has also become feasible to accurately model galaxy clustering on small, non-linear scales using the halo model approach combined with halo occupation statistics. 
The halo model postulates that all dark matter is partitioned over dark matter haloes, and describes the dark matter density distribution in terms of the halo building blocks (e.g., Neyman & Scott 1952;Seljak 2000;Ma & Fry 2000;Scoccimarro et al. 2001;Cooray & Sheth 2002). When combined with a model that describes how galaxies with certain properties are distributed over dark matter haloes of different mass, this can be used to make predictions for the clustering properties of galaxies on all scales that are observationally accessible (e.g., Jing, Mo & Börner 1998;Cooray & Sheth 2002;Yang, Mo & van den Bosch 2003). This approach has been used extensively in recent years to constrain the galaxy-dark matter connection, i.e., the connection between galaxy properties and halo mass, which holds important information regarding galaxy formation. On large, linear scales, the two-point correlation function between haloes of mass M can be written as ξ hh (r|M ) = b 2 h (M ) ξ lin mm (r), with ξ lin mm (r) the two-point correlation function of the linear matter distribution and b h (M ) the linear halo bias (e.g., Mo & White 1996). Similarly, for galaxies of a given luminosity, one has that ξgg(r|L) = b 2 g (L) ξ lin mm (r), with bg(L) the bias of galaxies of luminosity L. Hence, one can use ξgg(r|L) to infer the average mass of haloes that host galaxies of luminosity L by simply finding the M for which b h (M ) = ξgg(r|L)/ξ lin mm (r) 1/2 . By comparing the observed abundance of galaxies of luminosity L to the predicted abundance of haloes of mass M , one subsequently infers the average number of galaxies per halo. Hence, measurements of ξgg(r|L) can be used to constrain halo occupation statistics, and this technique has been widely used (Jing et al. 1998Peacock & Smith 2000;Bullock, Wechsler & Somerville 2002;Magliocchetti & Porciani 2003; † for instance, the CMB as measured by WMAP is consistent with a closed Universe with Hubble parameter h = 0.3 and no cosmological constant (e.g. Spergel et al. 2007) Yang et al. 2003Yang et al. , 2004van den Bosch et al. 2003avan den Bosch et al. , 2007Porciani, Magliocchetti & Norberg 2004;Wang et al. 2004;Zehavi et al. 2004Zehavi et al. , 2005Zheng 2004; Abazajian et al. 2005;Collister & Lahav 2005;Tinker et al. 2005; Lee et al. 2006). Note, though, that this method requires knowledge of both b h (M ) and ξ lin mm (r), both of which are strongly cosmology dependent. Consequently, the resulting halo occupation statistics are also cosmology dependent (see e.g., Zheng et al. 2002;van den Bosch et al. 2007;Cacciato et al. 2009). Although this makes it difficult to calibrate galaxy formation models using halo occupation statistics (e.g., Berlind et al. 2003), it also implies that one can use this method to constrain cosmological parameters as long as one has some independent constraints on halo occupation statistics. Various approaches to constrain cosmological parameters along these lines have been used in recent years. Abazajian et al. (2005) have shown that the degeneracy between occupation statistics and cosmology can (at least partially) be broken by using the correlation function itself, as long as one includes data on sufficiently small scales (i.e., the onehalo term). 
Using the projected correlation functions measured from the SDSS and allowing the cosmological parameters to vary within constraints imposed by various CMB experiments, they were able to obtain constraints that were significantly tighter than those from the CMB alone, with Ωm = 0.26 ± 0.03 and σ8 = 0.83 ± 0.04. Zheng et al. (2002) suggested that one can break the degeneracy between halo occupation model and cosmology by using the peculiar velocities of galaxies as inferred from the redshift space distortions in the two-point correlation function. This idea was used by Yang et al. (2004), who concluded that the power-spectrum normalization, σ8, needs to be of the order of ∼ 0.75 (assuming Ωm = 0.3), significantly lower than the value then advocated by WMAP. Very similar results were obtained by van den Bosch et al. (2007) and by Tinker et al. (2007). The latter used a much more sophisticated treatment of redshift space distortions developed by and Tinker (2007). An alternative approach for breaking the degeneracy between halo occupation model and cosmology is to use constraints on the (average) mass-to-light ratios of dark matter haloes. This method was first used by van den Bosch et al. (2003b) and Tinker et al. (2005), who were able to obtain relatively tight constraints on Ωm and σ8 from combinations of clustering data plus constraints on the mass-to-light ratios of clusters. Interestingly, both studies again found evidence for a relatively low value of the power spectrum normalization: σ8 ≃ 0.75 for Ωm = 0.25. Along similar lines, one can also use a combination of clustering and galaxy-galaxy lensing. The latter effectively probes the galaxy-dark matter cross correlation, and therefore holds information regarding the mass-to-light ratios of dark matter haloes covering a wide range in halo mass. Since its first detection by Brainerd, Blandford & Smail (1996), the accuracy of galaxy-galaxy lensing measurements has increased to the extent of yielding high signal-to-noise ratio measurements over a significant dynamic range in galaxy luminosity and/or stellar mass (e.g., Fisher et al. 2000;Hoekstra et al. 2002;Sheldon et al. 2004Sheldon et al. , 2009Mandelbaum et al. 2006Mandelbaum et al. , 2009Leauthaud et al. 2007). Similar to the galaxy-galaxy autocorrelation function, the galaxy-matter cross correlation function can be accurately modeled using the halo model (Guzik & Seljak 2001, 2002Mandelbaum et al. 2005;Yoo et al. 2006;Cacciato et al. 2009;Leauthaud et al. 2011Leauthaud et al. , 2012van Uitert et al. 2011). Hence, the combination of galaxy clustering and galaxy-galaxy lensing is ideally suited to constrain cosmological parameters, as demonstrated in detail by Yoo et al. (2006). A first application of this idea by Seljak et al. (2005), using the model of Guzik & Seljak (2002) and the galaxy-galaxy lensing data of Mandelbaum et al. (2006), combined with WMAP constraints, yielded σ8 = 0.88±0.06, only marginally consistent with the values obtained from the cluster mass-to-light ratios and/or the redshift space distortions mentioned above. However, more recently, two different analyses based on the same galaxy-galaxy lensing data by Cacciato et al. (2009) and Li et al. (2009) both argued that a flat ΛCDM cosmology with (Ωm, σ8) = (0.238, 0.734) is in much better agreement with the data than a (0.3, 0.9) model. Although the reason for the disagreement between these studies and that of Seljak et al. 
(2005) is probably related to the different modelling approaches, these studies all have demonstrated that a combination of clustering and lensing data holds great potential for constraining cosmological parameters. This is the first paper in a series in which we use a combination of galaxy clustering and galaxy-galaxy lensing data to constrain cosmological parameters. In this paper we present the theoretical framework and test the accuracy of our method using mock data. In More et al. 2012a (hereafter Paper II) we present a Fisher matrix analysis to identify parameter-degeneracies and to assess the accuracy with which various cosmological parameters can be constrained using the methodology presented here. Finally, in Cacciato et al. 2012b (hereafter Paper III) we apply our analysis to the actual SDSS data to constrain cosmological parameters (in particular Ωm and σ8) under the assumption of a 'standard' flat ΛCDM cosmology. Throughout this paper, unless specifically stated otherwise, all radii and densities will be in comoving units, and log is used to refer to the 10-based logarithm. Quantities that depend on the Hubble parameter will be written in units of h = H0/(100 km s −1 Mpc −1 ). MODEL DESCRIPTION Our main goal is to use galaxy clustering and galaxy-galaxy lensing, measured as function of luminosity from the main galaxy sample in the SDSS, to simultaneously constrain cosmology and halo occupation statistics. As detailed in papers II and III, the data that we will use consists of (i) the galaxy luminosity function, Φ(L, z), at the median redshift of the SDSS main galaxy sample (z ≃ 0.1), (ii) the projected twopoint correlation functions, wp(rp|L1, L2, z), for galaxies in six luminosity bins, [L1, L2], each with its own median redshift z, and (iii) the corresponding excess surface densities (ESD), ∆Σ(R|L1, L2, z). In this section, we present analytical expressions for wp(rp|L1, L2, z), ∆Σ(R|L1, L2, z) and Φ(L, z). For completeness and clarity we present a detailed, step-by-step derivation of our method, and we will emphasize where it differs from that of previous authors. The backbone of our model is the halo model, in which the matter distribution in the Universe is described in terms of its halo building blocks (see Cooray & Sheth 2002 and Mo, van den Bosch & White 2010 for comprehensive reviews). After a detailed description of how the halo model can be used to compute the power spectrum of the dark matter mass distribution ( §2.1), we show how the halo model can be complemented with a model for halo occupation statistics which allows one to compute wp(rp|L1, L2, z), ∆Σ(R|L1, L2, z) and Φ(L, z) for a given cosmology. In order to keep the derivations concise, in what follows we will not explicitly write down the dependencies on L1 and L2. The halo model Throughout this paper we define dark matter haloes as spherical overdensity regions with a radius, r200, inside of which the average density is 200 times the average density of the Universe. Hence, the mass of a halo is Under the assumption that all dark matter is bound in virialized dark matter haloes, the density perturbation field at redshift z, defined by can be written in terms of the spatial distribution of dark matter haloes and their internal density profiles. Throughout we assume that dark matter haloes are spherically symmetric and have a density profile, ρ h (r|M, z) = M u h (r|M, z), that depends only on mass, M , and redshift, z. Note that u h (x|M, z) d 3 x = 1. 
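The halo-mass definition above fixes the mass-radius relation M = (4π/3) · 200 ρ̄_m r_200³. A minimal helper in the comoving units adopted here is shown below; the constant ρ_crit,0 ≃ 2.775 × 10^11 h² M_⊙ Mpc⁻³ and the function name are our own choices, not part of the text.

```python
import numpy as np

RHO_CRIT0 = 2.775e11   # critical density today in h^2 Msun Mpc^-3

def r_200(M, omega_m):
    """
    Comoving radius r_200 (h^-1 Mpc) of a halo of mass M (h^-1 Msun), defined such
    that the mean density inside r_200 is 200 times the mean matter density.
    In comoving units the mean matter density is omega_m * RHO_CRIT0 at any redshift.
    """
    rho_m_bar = omega_m * RHO_CRIT0
    return (3.0 * M / (4.0 * np.pi * 200.0 * rho_m_bar)) ** (1.0 / 3.0)

# e.g. a 1e12 h^-1 Msun halo in an omega_m = 0.25 cosmology:
# r_200(1e12, 0.25) ~ 0.26 h^-1 Mpc (comoving)
```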
Now imagine that space (at some redshift z) is divided into many small volumes, ∆Vi, which are so small that none of them contains more than one halo center. The occupation number of haloes in the i th volume, N h,i , is therefore either 0 or 1, and so N h,i = N 2 h,i = N 3 h,i .... In terms of these occupation numbers the density field of the (dark) matter can formally be written as where Mi is the mass of the halo whose center is in ∆Vi. Using that the ensemble average where the last equality follows from the normalization of u h (x|M, z) and from the halo model ansatz that all dark matter is partitioned over dark matter haloes. Similar to δm we can also define the halo density contrast δ h . Ignoring possible stochasticity in the relation between δm and δ h , we can use a Taylor series expansion to write (Fry & Gaztanaga 1993;Mo, Jing & White 1997), where b h,n is called the halo bias factor of order n. Although the requirement that δ h = 0 implies that b h,0 = − ∞ n=2 b h,n δ n m /n!, which in general is not zero, one can ignore b h,0 since in Fourier space it only contributes to the galaxy power spectrum for wavevector k = 0. Furthermore, on large scales we have that |δm| ≪ 1, so that we can also neglect the higherorder (n > 1) bias factors. Hence, on large scales the cross correlation function of haloes of mass M1 and haloes of mass M2 can be written as where ξ lin mm (r, z) is the two-point correlation function of the initial density perturbation field, linearly extrapolated to redshift z, and we have used b h (M, z) as shorthand notation for the linear halo bias b h,1 (M, z). One can extend this prescription to the mildly non-linear regime, in which one can no longer ignore the higher-order bias terms, by replacing ξ lin mm (r, z) with the non-linear two-point matter correlation function, ξmm(r, z), and by including a radial dependence of the halo bias, ζ(r, z) (which effectively captures the effect of the higher-order bias parameters, see §3.4 below). Under the assumption that haloes are spherical, one then obtains that where Θ(x) is the Heaviside step function, which assures that ξ hh (r, z|M1, M2) = −1 for r < rmin in order to account for halo exclusion, i.e., the fact that dark matter haloes cannot overlap. In principle, one expects that rmin = rmin(M1, M2, z) = r200(M1, z) + r200(M2, z). However, the halo finder used by Tinker et al. (2008), whose halo mass function we use, does allow overlap of haloes in that any halo is considered a host halo as long as its center does not lie within the outer radius of another halo. Therefore, to be consistent, we follow Tinker et al. (2012) and Leauthaud et al. (2011), and adopt that rmin = MAX [r200(M1, z), r200(M2, z)]. For computational convenience, we will be working in Fourier space. To that extent we define the Fourier transform of ρm(x, z) as where V is the volume over which the Universe is assumed to be periodic, and is the Fourier transform of the normalized halo density profile. With our definition of the Fourier transform, the (nonlinear) matter-matter power spectrum is defined as where is the Dirac delta function, ρ * indicates the complex conjugate of ρ, and we have used thatρm(0) =ρm. Using Eq. (12) we have that which we split in two terms: the one-halo term, for which j = i, and the two-halo term with j = i. The former can be written as where we have used that N 2 h,i = N h,i . 
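Assuming the standard halo-model form of the one-halo term that this derivation leads to, P^1h_mm(k) = ∫ dM n(M) (M/ρ̄_m)² |ũ_h(k|M)|², a schematic quadrature over a log-mass grid reads as follows; the array names are placeholders for whichever mass function and halo profile are adopted.

```python
import numpy as np

def p1h_matter(logM, dndlogM, u_kM, rho_m_bar):
    """
    One-halo term of the matter power spectrum at a single wavenumber k,
        P_1h(k) = \int dM  n(M) (M / rho_m_bar)^2 |u(k|M)|^2 ,
    via the trapezoidal rule on a log10(M) grid.  `dndlogM` holds n(M) dM/dlog10(M)
    and `u_kM` the normalized profile u(k|M) at that k; both are user-supplied
    tabulations on the same mass grid.
    """
    M = 10.0 ** logM
    return np.trapz(dndlogM * (M / rho_m_bar) ** 2 * np.abs(u_kM) ** 2, logM)
```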
For the 2-halo term we use the fact that we are free to choose ∆Vi arbitrary small, so that Here we have accounted for the fact that dark matter haloes are clustered, as described by the two-point halo-halo correlation function ξ hh (r, z|M1, M2). Hence, using Eq. (11), which properly accounts for halo exclusion, we have that Here we have used thatũ * (k|M, z) =ũ(k|M, z), which follows from the fact that u(x|M, z) is real and even, and we have defined with k = |k| and with ξ hh (r, z|M1, M2) given by Eq. (11). Combining Eqs. (14)-(19), and using that haloes are defined to be spherically symmetric, we finally obtain that where and Our treatment of halo exclusion is similar to that of Smith, Scoccimarro & Sheth (2007) and Smith, Desjacques & Marian (2011), except that we have included the (semiempirical) factor ζ(r, z) to account for the radial dependence of halo bias. As shown in Smith et al. (2011), Eq. (23) has the correct asymptotic behavior at both large and small scales. This is an important improvement over a number of approximate methods that have been advocated and which typically involve adopting an upper limit for the mass interval used in the integral for the 2-halo term of the power spectrum (e.g., Takada & Jain 2003;Zheng 2004;Abazajian et al. 2005;Tinker et al. 2005Tinker et al. , 2012Yoo et al. 2006;Leauthaud et al. 2011 The galaxy-galaxy correlation function If one assumes that each galaxy resides in a dark matter halo, the halo model described above can also be used to compute the galaxy-galaxy correlation function or the galaxy-matter cross correlation function. All that is needed is a statistical description of how galaxies are distributed over dark matter haloes of different mass. To that extent we use the conditional luminosity function (hereafter CLF) introduced by Yang et al. (2003). The CLF, Φ(L|M )dL, specifies the average number of galaxies with luminosities in the range L ± dL/2 that reside in a halo of mass M . Throughout we ignore a potential redshift dependence of the CLF. Since the data that we use to constrain the CLF only covers a narrow range in redshift (see Paper III), this assumption will not have a strong impact on our results. Once the CLF is specified, the galaxy luminosity function at redshift z, Φ(L, z), simply follows from integrating over the halo mass function, n(M, z); In what follows, we will always be concerned with galaxies in a specific luminosity interval [L1, L2]. The average number density of such galaxies follows from the CLF according tō where is the average number of galaxies with L1 < L < L2 that reside in a halo of mass M . For reasons that will become clear below, we split the galaxy population in centrals (defined as those galaxies that reside at the center of their host halo) and satellites (those that orbit around a central), and we split the CLF in two terms accordingly: where Φc(L|M ) and Φs(L|M ) represent central and satellite galaxies, respectively (cf., Cooray & Milosavljevic 2005). Similarly, we write the number density of galaxies, ng(x, z), as the sum of the contribution of centrals, nc(x, z), and that of satellites, ns(x, z), so that δg(x, z) ≡ ng(x, z) −ng(z) ng(z) = fc(z)δc(x, z) + fs(z)δs(x, z) . Here fc(z) =nc(z)/ng(z) is the central fraction, fs(z) = ns(z)/ng(z) = 1 − fc(z) is the satellite fraction, and δc(x, z) and δs(x, z) are the number density contrasts of centrals and satellites at redshift z, respectively. Note thatnc(z) and ns(z) simply follow from Eq. (25) by replacing Φ(L|M ) in Eq. 
(26) by Φc(L|M ) and Φs(L|M ), respectively. The detailed functional form that we adopt for Φ(L|M ) is discussed in §3.7. In this subsection we show how the CLF enters in the computation of the (projected) galaxy-galaxy correlation function, wp(rp|L1, L2, z), and in the excess surface density profile, ∆Σ(R|L1, L2, z). Within the framework of the halo model, we can write where Nc,i is the number of central galaxies in the halo whose center is in volume element i (i.e., Nc,i is either 0 or 1). The Dirac delta function expresses the fact that a central, by definition, resides at the center of a dark matter halo. Similarly, for the satellite galaxies we can write where Ns,i is a positive integer indicating the number of satellite galaxies that reside in the halo whose center is in volume element i, and us(r|M, z) describes the normalized radial distribution of satellite galaxies in an average halo of mass M at redshift z ‡ . Using Eq.(28), the galaxy-galaxy power spectrum can be written as while the galaxy-matter cross power spectrum is given by Using the same methodology as in §2.1 for the dark matter, we split each of these five power-spectra into a 1halo and a 2-halo term. The various 2-halo terms are given by where 'x' and 'y' are either 'c' (for central), 's' (for satellite), or 'm' (for matter), Q(k|M1, M2, z) is given by Eq. (20), and we have defined and Here Nc|M and Ns|M are the average number of central and satellite galaxies in a halo of mass M , which follow from Eq. (26) upon replacing Φ(L|M ) by Φc(L|M ) and Φs(L|M ), respectively. For the 1-halo terms, one obtains and Here we have assumed that the occupation numbers of centrals and satellites are independent, so that NcNs|M = Nc|M Ns|M , and we have introduced the parameter If the occupation number of satellites follows a Poisson distribution, i.e., with λ = Ns|M , then AP = 1, while values of AP larger (smaller) than unity indicate super-(sub-) Poisson statistics. The Projected Correlation Function and Excess Surface Density Once Pgg(k, z) and Pgm(k, z) have been determined, it is fairly straightforward to compute the projected galaxygalaxy correlation function, wp(rp, z), and the excess surface density (ESD) profile, ∆Σ(R, z). We start by Fourier transforming the power-spectra to obtain the two-point correlation functions: where 'x' and 'y' are as defined above. As discussed above, the excess surface density profile where Σ(< R, z) is given by Eq. (4). The projected surface density, Σ(R, z), is related to the galaxy-matter cross correlation, ξgm(r, z), according to where the integral is along the line of sight with ω the comoving distance from the observer. The three-dimensional comoving distance r is related to ω through r 2 = ω 2 L + ω 2 − 2ωLω cos θ. Here ωL is the comoving distance to the lens, and θ is the angular separation between lens and source (see Fig. 1 in Cacciato et al. 2009). Since ξgm(r, z) goes to zero in the limit r → ∞, and since in practice θ is small, we can approximate Σ(R, z) using Eq. (3), which is the expression we adopt throughout. The projected galaxy-galaxy correlation function is defined as Here rp is the projected separation between two galaxies, rπ is the redshift-space separation along the line-of-sight, and ξgg(rp, rπ, z) is the measured two-dimensional correlation function, which is anisotropic due to the presence of peculiar velocities. 
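The occupation integrals of Eqs. (25)-(26), which enter all of the 1- and 2-halo terms above, are simple quadratures. A sketch is given below, with the CLF Φ(L|M) passed in as a user-supplied callable (a placeholder, not a specific parameterization).

```python
import numpy as np

def occupation(clf, M, L1, L2, n_L=256):
    """<N|M> = \int_{L1}^{L2} Phi(L|M) dL  (Eq. 26), via a trapezoid in log10(L)."""
    logL = np.linspace(np.log10(L1), np.log10(L2), n_L)
    L = 10.0 ** logL
    return np.trapz(clf(L, M) * L * np.log(10.0), logL)

def mean_number_density(clf, logM, dndlogM, L1, L2):
    """nbar_g = \int <N|M> n(M) dM  (Eq. 25), on the same log10(M) grid as n(M)."""
    NofM = np.array([occupation(clf, 10.0 ** lm, L1, L2) for lm in logM])
    return np.trapz(NofM * dndlogM, logM)
```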
In the limit rmax → ∞, the projected correlation function (45) is completely independent of these peculiar velocities, simply because they have been integrated out. In that case, wp(rp) can be written as a simple Abel transform of the real-space correlation function: (Davis & Peebles 1983). However, since real data sets are always limited in extent, in practice the projected correlation function wp(rp, z) is always obtained by integrating ξgg(rp, rπ, z) out to some finite rmax rather than to infinity. For example, Zehavi et al. (2011), whose data we use in Paper III, adopt rmax = 40h −1 Mpc or 60h −1 Mpc, depending on the luminosity sample used. This finite integration range is often ignored in the modeling (e.g., Magliocchetti & Porciani 2003;Collister & Lahav 2005;Wake et al. 2008a,b) or is 'accounted' for by computing the model prediction for wp(rp, z) using Eq. (46), but integrating from rp out to rout ≡ r 2 p + r 2 max , where rmax is the same value as used for the data (e.g., Zehavi et al. 2004Zehavi et al. , 2005Zehavi et al. , 2011Abazajian et al. 2005;Tinker et al. 2005;Zheng et al. 2007Zheng et al. , 2009Yoo et al. 2009). However, as we demonstrate in §4.5 below, this introduces errors that can easily exceed 40 percent or more on the largest scales probed by the data (∼ 20h −1 Mpc; see also Padmanabhan, White & Eisenstein 2007;Norberg et al. 2009;Baldauf et al. 2010). This is due to the fact that the peculiar velocities on scales r > rmax cannot be ignored. In order to take these residual redshift space distortions into account, we make the assumption that the large scale peculiar velocities are completely dominated by linear velocities (i.e., those that derive from linear perturbation theory), and that the non-linear motions that give rise to the Finger-of-God effect have been integrated out. In that case we can correct Eq. (46) for the fact that the projected correlation function has been obtained using Eq. (45) with a finite rmax as follows: where fcorr(rp, z) is the correction factor given by Here ξ lin gg (r, z) and ξ lin gg (rp, rπ, z) are the linear two-point correlation functions of galaxies at redshift z in real space and redshift space, respectively. For the former we may write with ξ lin mm (r, z) the two-point correlation function of the initial matter field, linearly extrapolated to redshift z, and is the mean bias of the galaxies in consideration. For the linear galaxy correlation function in redshift space we can write (e.g., Kaiser 1987;Hamilton 1992). Here s = r 2 p + r 2 π is the separation between the galaxies in redshift space, µ = rπ/s is the cosine of the line-of-sight angle, P l (x) is the l th Legendre polynomial, and ξ0, ξ2, and ξ4 are given by where and with a = 1/(1 + z) the scale factor and D(z) the linear growth rate. As we demonstrate in §4.5, although this correction is fairly accurate on large scales ( > ∼ 3h −1 Mpc), on smaller scales it introduces an error of a few percent (see also Baldauf et al. 2010). Detailed tests with mocks indicate that this problem can be avoided by simply replacing the linear galaxy-galaxy correlation function in the Kaiser formalism with its non-linear analog; i.e., by replacing in Eq. (48) and Eqs. (52)-(55) each occurrence of ξ lin gg (r, z) with ξgg(r, z) computed from Eq. (42) using the model outlined in §2.2. 
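For reference, the finite-r_max projection just described can be written as a single line-of-sight quadrature. The sketch below uses the isotropic real-space ξ_gg and our own function names, and therefore still omits the residual redshift-space distortions discussed above.

```python
import numpy as np
from scipy.integrate import quad

def wp_finite(xi_gg, rp, rmax=40.0):
    """
    Projected correlation function with a finite line-of-sight integration,
        w_p(r_p) = 2 \int_0^{r_max} xi_gg( sqrt(r_p^2 + r_pi^2) ) dr_pi ,
    equivalent to integrating the Abel form of Eq. (46) from r_p out to
    r_out = sqrt(r_p^2 + r_max^2).  `xi_gg` is a user-supplied callable.
    """
    val, _ = quad(lambda rpi: xi_gg(np.sqrt(rp**2 + rpi**2)), 0.0, rmax, limit=200)
    return 2.0 * val
```

In the modified Kaiser treatment described above, the isotropic ξ_gg inside this integral is replaced by the anisotropic ξ_gg(r_p, r_π, z) of Eqs. (51)-(55).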
This is the method we will use throughout whenever we compute wp(rp, z) for comparison with data, always using the same rmax as used for the data (see Paper III) and withb(z) computed from our CLF model using Eq. (50). Note that with this modified version of the Kaiser formalism, the denominator of fcorr in Eq. (48) is exactly equal to the integral in Eq. (47). Hence, there is no need to compute the correction factor; rather, wp(rp) can simply be obtained directly using Eq. (45) with ξgg(rp, rπ, z) given by Eqs. (51)-(55), but with ξ lin gg (r, z) replaced by ξgg(r, z) (see §4.5 for details). MODEL INGREDIENTS The model described in the previous section requires a number of ingredients, namely the halo mass function, n(M, z), the halo bias function, b h (M, z), the radial bias function, ζ(r, z), the linear and non-linear matter power spectra, P lin mm (k, z) and Pmm(k, z), respectively, the (normalized) halo density profile, u h (r|M ), the (normalized) radial number density distribution of satellite galaxies, us(r|M ), and the halo occupations statistics Nc|M and Ns|M . We now discuss these ingredients in turn. Matter Power Spectra In our fiducial model, which includes a treatment of halo exclusion, we require both the linear and the non-linear two-point correlation functions of the matter, ξ lin mm (r, z) and ξmm(r, z), which are the Fourier transform of the linear and non-linear power-spectrum, P lin mm (k, z) and Pmm(k, z), respectively. Throughout we compute Pmm(k, z) using the fitting formula of Smith et al. (2003) § which is modeled on the basis of the linear matter power spectrum, Here ns is the spectral index of the initial power spectrum, T (k) is the linear transfer function, and D(z) is the linear growth factor at redshift z, normalized to unity at z = 0. We adopt the linear transfer function of Eisenstein & Hu (1998), which properly accounts for the baryons, neglecting any contribution from neutrinos and assuming a CMB temperature of 2.725K (Mather et al. 1999). The power spectrum is normalized such that the mass variance is equal to σ 2 8 for R = 8h −1 Mpc. Here is the Fourier transform of the spatial top-hat filter, and M is related to R according to M = 4πρmR 3 /3. Halo Mass Function For the halo mass function, n(M, z), which specifies the comoving abundance of dark matter haloes of mass M at redshift z, we use the results of Tinker et al. (2008Tinker et al. ( , 2010, who have shown that the halo mass function is accurately described by where ν = δsc(z)/σ(M ), with δsc(z) the critical overdensity required for spherical collapse at z, and For our definition of halo mass (see §2.1), Tinker et al. (2010) find with b h (ν) the halo bias function of Tinker et al. (2010), specified in §3.3 below. This normalization expresses that the distribution of matter is, by definition, unbiased with respect to itself. Throughout we adopt which is a good numerical approximation to the critical threshold for spherical collapse (Navarro, Frenk & White 1997 Halo Bias Function For the halo bias function we adopt the fitting function of Tinker et al. (2010), which for our definition of halo mass, can be written as where, as before, ν = δsc(z)/σ(M ), Although we believe the halo mass function and halo bias function obtained by Tinker et al. (2008Tinker et al. ( , 2010 to be the most accurate to date, it is important to realize that they still can carry uncertainties that can potentially impact cosmological results. 
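As a concrete illustration of the normalization and mass-variance machinery described above, the following sketch computes σ(M) from a linear power spectrum normalized to σ8 at R = 8 h⁻¹ Mpc, together with the corresponding peak height ν = δsc/σ(M). The BBKS-shaped transfer function is used here only as a stand-in for the Eisenstein & Hu (1998) form adopted in the text, δsc ≈ 1.686 is an approximation, and all parameter values are assumptions for illustration.

```python
import numpy as np
from scipy.integrate import quad

Omega_m, sigma8, ns, h = 0.3, 0.8, 0.96, 0.7      # assumed cosmology
rho_m = 2.775e11 * Omega_m                        # mean matter density [h^2 Msun / Mpc^3]

def T_bbks(k):
    # BBKS-shaped transfer function, used only as a stand-in for Eisenstein & Hu (1998)
    q = k / (Omega_m * h)
    return (np.log(1 + 2.34 * q) / (2.34 * q)
            * (1 + 3.89*q + (16.1*q)**2 + (5.46*q)**3 + (6.71*q)**4) ** -0.25)

P_unnorm = lambda k: k**ns * T_bbks(k)**2         # linear P(k) up to normalization

def W_th(x):                                      # Fourier transform of the spatial top-hat filter
    return 3.0 * (np.sin(x) - x * np.cos(x)) / x**3

def sigma_unnorm(R):
    def integrand(lnk):
        k = np.exp(lnk)
        return k**3 * P_unnorm(k) * W_th(k * R)**2 / (2 * np.pi**2)
    return np.sqrt(quad(integrand, np.log(1e-5), np.log(1e2), limit=300)[0])

A = sigma8 / sigma_unnorm(8.0)                    # normalize so sigma(R = 8 Mpc/h) = sigma8

def sigma_M(M):
    R = (3.0 * M / (4.0 * np.pi * rho_m)) ** (1.0 / 3.0)   # M = 4 pi rho_m R^3 / 3
    return A * sigma_unnorm(R)

delta_sc = 1.686                                  # approximate spherical-collapse threshold at z = 0
for M in [1e12, 1e13, 1e14]:
    print(M, sigma_M(M), delta_sc / sigma_M(M))   # sigma(M) and peak height nu
```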
It is unclear if such uncertainties affect just the mass function normalization and not its shape. We will carry out a proper investigation of this issue in future work. Throughout this paper, however, we restrict ourselves to the n(M, z) and b h (M, z) specified above. Radial Bias Function An important ingredient of the halo model is the radial bias function, ζ(r, z), which accounts for the fact that Eq. (10) becomes inaccurate in the quasi-linear regime, by making halo bias scale dependent, i.e., it effectively describes the impact of the non-zero higher-order bias factors in Eq. (9). Ideally, the radial dependence of the halo bias is to be computed from first principles using, for example, (renormalized) perturbation theory (e.g., Crocce & Scoccimarro 2006;McDonald 2006McDonald ,2007Smith, Scoccimarro & Sheth 2007;Elia et al. 2011). However, it remains to be seen whether these techniques can yield reliable results in the quasi-linear regime of the 1-halo to 2-halo transition region, which will probably require an impracticable large number of orders or loops in the perturbation series. In the absence of such an analytical solution we have to resort to empirical fitting functions calibrated against numerical simulations. Throughout, we adopt the fitting function of Tinker et al. (2005), given by The subscript 0 indicates that this fitting function was calibrated using N -body simulations in which the haloes were identified using the friends-of-friends (FOF) percolation algorithm (e.g., Davis et al. 1985), with a linking length of 0.2 times the mean inter-particle separation. However, the halo mass function and halo bias function used here are based on the spherical overdensity algorithm. As already pointed out in Appendix A of Tinker et al. (2012), because of these different halo definitions, the fitting function (65) is likely to be inadequate on small scales, which we indeed find to be the case (see §4.2 below). After some trial and error, while assuring an easy numerical implementation, we decided to adopt the following, modified, radial bias function where the characteristic radius, r ψ , is defined by where ψ is a free parameter to be calibrated against numerical simulations (see §4.2). Note that if Eq. (67) has no solution, e.g., when ψ → +∞, we set r ψ = 0, which corresponds to simply using the fitting function (65) without modification. Density Profile of Dark Matter Haloes We assume that dark matter haloes are spheres whose normalized density distribution is given by the NFW profile (Navarro, Frenk & White 1997), where r * is a characteristic radius and δ200 is a dimensionless amplitude which can be expressed in terms of the halo concentration parameter c ≡ r200/r * as δ200 = 200 3 Numerical simulations show that c is correlated with halo mass. Throughout our work we use the concentration-mass relation of Macciò et al. (2007), properly converted to our definition of halo mass. The Fourier transform of the NFW profile, which features predominantly in our model, is given bỹ where µ ≡ kr * , and Si(x) and Ci(x) are the standard sine and cosine integrals, respectively. Note that this model for dark matter haloes is highly oversimplified. In reality, haloes are triaxial, rather than spherical, have scatter in the concentration-mass relation, have substructure, and may have a density profile that differs significantly from a NFW profile due to the action of baryons. A detailed discussion regarding the impact of these oversimplifications on our results is presented in §5. 
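The Fourier transform of the NFW profile quoted above can be evaluated directly with the standard sine and cosine integrals; the sketch below is one possible implementation. It assumes halo masses defined by an overdensity of 200 with respect to the mean matter density (consistent with the definition in §2.1), and the mass and concentration values at the bottom are chosen arbitrarily for illustration.

```python
import numpy as np
from scipy.special import sici

def u_nfw_k(k, M, c, rho_m=2.775e11 * 0.3, Delta=200.0):
    """Normalized Fourier transform of an NFW profile truncated at r_Delta.
    Halo mass is assumed to be Delta times the mean matter density within r_Delta
    (so r_Delta = [3M / (4 pi Delta rho_m)]^(1/3)), and mu = k * r_s with r_s = r_Delta / c."""
    r_delta = (3.0 * M / (4.0 * np.pi * Delta * rho_m)) ** (1.0 / 3.0)   # [Mpc/h]
    r_s = r_delta / c
    mu = k * r_s
    si_lo, ci_lo = sici(mu)
    si_hi, ci_hi = sici((1.0 + c) * mu)
    f = 1.0 / (np.log(1.0 + c) - c / (1.0 + c))
    return f * (np.sin(mu) * (si_hi - si_lo)
                + np.cos(mu) * (ci_hi - ci_lo)
                - np.sin(c * mu) / ((1.0 + c) * mu))

k = np.logspace(-2, 2, 5)                   # [h/Mpc]
print(u_nfw_k(k, M=1e13, c=8.0))            # -> ~1 for k -> 0, declines at large k
```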
Radial Number Density Distribution of Satellites Throughout, we assume that satellite galaxies follow a radial number density distribution given by a generalized NFW profile (e.g., van den Bosch et al. 2004), us(r|M) ∝ (r/Rr*)^−γ (1 + r/Rr*)^(γ−3), so that us ∝ r^−γ and us ∝ r^−3 at small and large radii, respectively. Here R and γ are two free parameters, while the scale radius r* is the same as that for the dark matter mass profile (Eq. [68]). For our fiducial model, we adopt R = γ = 1 so that us(r|M) = uh(r|M), i.e. satellites are unbiased with respect to the dark matter. For consistency with our definition of halo mass, we only adopt profile (71) out to r200 (i.e., all satellites have halo-centric radii r < r200). Observations of the number density distribution of satellite galaxies in clusters and groups seem to suggest that us(r|M) is in reasonable agreement with an NFW profile, for which γ = 1 (e.g., Beers & Tonry 1986; Carlberg, Yee & Ellingson 1997a; van der Marel et al. 2000; Lin, Mohr & Stanford 2004; van den Bosch et al. 2005a). However, several studies have suggested that the satellite galaxies are less centrally concentrated than the dark matter, corresponding to R > 1 (e.g., Yang et al. 2005; Chen 2008; More et al. 2009a). On the other hand, in the case of very massive galaxies, in particular luminous red galaxies, there are strong indications that they follow a radial profile that is more centrally concentrated (i.e., R < 1) than the dark matter (e.g., Masjedi et al. 2006; Watson et al. 2010, 2012; Tal, Wake & van Dokkum 2012). In Paper III we therefore examine how the results depend on changes in R. Halo Occupation Statistics As specified in §2.2, the halo occupation statistics Nc|M and Ns|M, required to describe the galaxy auto-correlation function and the galaxy-matter cross-correlation function, are obtained from the CLF. We use the CLF model presented in Cacciato et al. (2009), which is motivated by the CLFs obtained by Yang, Mo & van den Bosch (2008) from a large galaxy group catalog extracted from the SDSS Data Release 4 (Adelman-McCarthy et al. 2006). In particular, the CLF of central galaxies is modeled as a log-normal, Φc(L|M) = [1/(√(2π) ln(10) σc L)] exp[−(log L − log Lc)²/(2σc²)], and the satellite term as a modified Schechter function, Φs(L|M) = (φ*s/L*s)(L/L*s)^αs exp[−(L/L*s)²], which decreases faster than a Schechter function at the bright end. Note that Lc, σc, φ*s, αs and L*s are all functions of the halo mass M. Following Cacciato et al. (2009), and motivated by the results of Yang et al. (2008) and More et al. (2009a, 2011), we assume that σc, which expresses the scatter in log L of central galaxies at fixed halo mass, is a constant (i.e. is independent of halo mass and redshift). In addition, for Lc, which is defined such that log Lc is the expectation value for the (10-based) logarithm of the luminosity of a central galaxy, i.e. log Lc = ∫ Φc(L|M) log L dL, we adopt Lc(M) = L0 (M/M1)^γ1 / [1 + (M/M1)]^(γ1−γ2). For the satellite galaxies we adopt αs(M) = αs, L*s(M) = 0.562 Lc(M), and log[φ*s(M)] = b0 + b1 (log M12) + b2 (log M12)², with M12 = M/(10^12 h^−1 M⊙). Note that neither of these functional forms has a physical motivation; they merely were found to adequately describe the results obtained by Yang et al. (2008) from the SDSS galaxy group catalog. [Figure 1 caption (possibly truncated): Colored symbols reflect the results obtained from the L250 simulation box. Errorbars (from Poisson statistics) are indicated, but since they are almost always smaller than the symbols, they can only be seen for 2 or 3 data points. The various curves are analytical results for three different values of ψ, as indicated in the lower left-hand panel. Note that the model with ψ = 0.9 accurately reproduces the sharp feature in ξhm(r), which reflects the 1-halo to 2-halo transition regime.]
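For concreteness, the following sketch evaluates the log-normal central CLF, the modified Schechter satellite CLF, and the resulting occupation numbers Nc|M and Ns|M above a luminosity threshold Lmin. The functional forms follow the expressions quoted above, while the specific parameter values (including the choice L*s = 0.562 Lc) are illustrative assumptions rather than fitted values.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erf

ln10 = np.log(10.0)

def phi_c(L, Lc, sigma_c):
    """Log-normal CLF of central galaxies (per unit luminosity)."""
    return np.exp(-(np.log10(L / Lc))**2 / (2 * sigma_c**2)) / (np.sqrt(2 * np.pi) * sigma_c * ln10 * L)

def phi_s(L, phistar, Lstar, alpha_s):
    """Modified Schechter CLF of satellites (per unit luminosity)."""
    return (phistar / Lstar) * (L / Lstar)**alpha_s * np.exp(-(L / Lstar)**2)

def N_cen(Lmin, Lc, sigma_c):
    # closed form of the integral of the log-normal above Lmin
    x = np.log10(Lmin / Lc) / (np.sqrt(2.0) * sigma_c)
    return 0.5 * (1.0 - erf(x))

def N_sat(Lmin, phistar, Lstar, alpha_s):
    return quad(phi_s, Lmin, 100.0 * Lstar, args=(phistar, Lstar, alpha_s), limit=200)[0]

# Illustrative (assumed) CLF values for a single halo mass:
Lc, sigma_c = 10**10.3, 0.16
phistar, Lstar, alpha_s = 5.0, 0.562 * Lc, -1.3
Lmin = 10**9.5
print(N_cen(Lmin, Lc, sigma_c), N_sat(Lmin, phistar, Lstar, alpha_s))
```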
Our parameterization of the CLF thus has a total of nine free parameters, λCLF ≡ (log M1, log L0, γ1, γ2, σc, αs, b0, b1, b2). The final parameter used to describe the halo occupation statistics of the galaxies is AP, defined in Eq. (39). In our fiducial model, adopted here, we will keep this parameter fixed at AP = 1, which corresponds to assuming that satellites follow Poisson statistics. As shown in Yang et al. (2008), this assumption has strong support from galaxy group catalogs. Additional support comes from numerical simulations which show that dark matter subhaloes (which are believed to host satellite galaxies) also follow Poisson statistics. However, there are also some indications that the occupation statistics of subhaloes and/or satellite galaxies are actually slightly super-Poisson, i.e., AP > ∼ 1 (e.g., Porciani, Magliocchetti & Norberg 2004; Giocoli et al. 2010a; Busha et al. 2011; Boylan-Kolchin et al. 2010). Hence, in Paper III we will also discuss models in which AP is taken to be a free parameter. MODEL TESTS In this section we describe the construction of large mock galaxy distributions, which we use to calibrate and test the real-space galaxy-galaxy and galaxy-matter correlation functions computed using the method outlined in §2.2. In particular, we calibrate the scale dependence of the halo bias and test the accuracy of our halo-exclusion treatment, which we compare to some approximate methods that do not account for halo exclusion but that are frequently used in the literature. In addition, we also use these mock galaxy distributions to test our correction for residual redshift space distortions. Construction of Mock Galaxy Distributions For testing and calibrating the method described in §2 we use two different N-body simulations that have been run using the adaptive refinement technique (ART) of Kravtsov, Klypin & Khokhlov (1997). [Figure 2 caption (beginning truncated): ... Fig. 1, errorbars reflecting Poisson statistics are indicated, but are almost always smaller than the symbols. The bottom panels show the fractional difference between model and mock for the total correlation functions shown in the top panels. The dark and light shaded areas indicate fractional errors of less than 5 and 10 percent, respectively. As is evident, the accuracy of our model is typically better than 5 percent, and always better than 10 percent.] Both simulations have been used by Tinker et al. (2008, 2010) in their studies of the halo mass function and halo bias function, where they are called L250 and L1000W. We adopt the same nomenclature in what follows. Simulation L250 follows the evolution of 512^3 dark matter particles in a cubic box of 250 h^−1 Mpc size in a flat ΛCDM cosmology with matter density Ωm = 0.3, baryon density Ωb = 0.04, Hubble parameter h = 0.7, spectral index ns = 1.0, and a matter power spectrum normalization of σ8 = 0.9. Simulation L1000W follows the evolution of 1024^3 dark matter particles in a 1 h^−1 Gpc size box in a flat ΛCDM cosmology with matter density Ωm = 0.27, baryon density Ωb = 0.044, Hubble parameter h = 0.7, spectral index ns = 0.95, and a matter power spectrum normalization of σ8 = 0.79. The particle masses are mp = 9.69 × 10^9 h^−1 M⊙ and mp = 6.98 × 10^10 h^−1 M⊙ for L250 and L1000W, respectively. For both simulations we use the halo catalogs at z = 0, kindly provided to us by Jeremy Tinker. These haloes are defined as spheres with an overdensity of 200, which is identical to our definition of halo mass (see §2.1). More information about these simulations and the identification of their dark matter haloes can be found in Tinker et al. (2008). [Figure 3 caption (beginning truncated): ... Fig. 2 but now for the galaxy-matter cross correlations. In the middle row of panels, the 1-halo component is split in the central-matter (purple symbols, labeled '1h[cm]') and satellite-matter (blue symbols, labeled '1h[sm]') parts. Similar to the galaxy-galaxy correlation functions, the accuracy of our model is typically better than 5 percent, and always better than 10 percent.] In what follows we will use the L250 simulation box to calibrate and test our galaxy-galaxy and galaxy-matter correlation functions, while L1000W is used to test our correction for residual redshift space distortions. To this end, we construct mock galaxy distributions by populating the dark matter haloes with model galaxies using the CLF. In particular, we model the CLF using the parameterization described in §3.7 with the following parameters: L0 = 10^9.9 h^−2 L⊙, M1 = 10^10.9 h^−1 M⊙, σc = 0.16, γ1 = 5.0, γ2 = 0.24, αs = −1.3, b0 = −1.2, b1 = 1.4, and b2 = −0.17. For each halo we first draw the luminosity of its central galaxy from Φcen(L|M), given by Eq. (73). Next, we draw the number of satellite galaxies, under the assumption that P(Nsat|M) follows a Poisson distribution (i.e., AP = 1.0) with mean ⟨Nsat|M⟩ = ∫_Lmin^∞ Φsat(L|M) dL, where we adopt a luminosity threshold, Lmin, corresponding to 0.1Mr − 5 log h = −18 (here 0.1Mr indicates the SDSS r-band magnitude, K-corrected to z = 0.1; see Blanton et al. 2003). For each of the Nsat satellites in the halo in question we then draw a luminosity from the satellite CLF Φsat(L|M), given by Eq. (74). Having assigned all mock galaxies their luminosities, the next step is to assign them a position and velocity within their halo. We assume that the central galaxy resides at rest at the center of the halo, while satellite galaxies follow a spherically symmetric number-density distribution proportional to Eq. (71) with R = γ = 1, i.e. we assume that satellite galaxies are unbiased with respect to the dark matter. For the halo concentrations we adopt the concentration-mass relation of Macciò et al. (2007), properly converted to our definition of halo mass. Finally, the peculiar velocities of the satellite galaxies are assigned as follows. We assume that satellite galaxies are in a steady-state equilibrium within their dark matter potential well with an isotropic distribution of velocities with respect to the halo center. One-dimensional velocities are drawn from a Gaussian, P(vj) ∝ exp[−vj²/(2σsat²(r))], with vj the velocity relative to that of the central galaxy along axis j and σsat(r) the local, one-dimensional velocity dispersion obtained from solving the Jeans equation (see van den Bosch et al. 2004; More et al. 2009b). For reasons that will become clear below, in both simulation boxes we only populate dark matter haloes with masses in the range Mmin ≤ M ≤ Mmax, where Mmin = 10^12 h^−1 M⊙ and 10^13 h^−1 M⊙ for L250 and L1000W, respectively, while Mmax = 10^14.5 h^−1 M⊙ for both L250 and L1000W. Calibrating Scale Dependence of Halo Bias As discussed in §3.4, fitting function (65) for the radial bias is likely to be inaccurate on small scales due to the fact that it was calibrated for a different halo definition than the one used here. To investigate the magnitude of this effect, and to test plausible corrections for it, we compare our model predictions against the L250 simulation box.
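A minimal sketch of the mock-population recipe just described is given below: the central luminosity is drawn from the log-normal CLF, the number of satellites from a Poisson distribution with the CLF-derived mean, and satellite positions from an NFW number-density profile truncated at r200 by inverting the enclosed-mass fraction. The numerical values passed in at the bottom are placeholders, and satellite luminosities and the Jeans-based velocities mentioned in the text are omitted for brevity.

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(42)

def mu_nfw(x, c):
    """Fraction of NFW mass enclosed within r = x * r200 (0 <= x <= 1)."""
    f = lambda y: np.log(1.0 + y) - y / (1.0 + y)
    return f(c * x) / f(c)

def sample_nfw_radii(n, c):
    """Draw n halo-centric radii (in units of r200) by inverting the enclosed-mass fraction."""
    u = rng.random(n)
    return np.array([brentq(lambda x: mu_nfw(x, c) - ui, 1e-6, 1.0) for ui in u])

def populate_halo(c, mean_nsat, logLc, sigma_c):
    """Assign a central luminosity and Poisson-distributed satellites to one halo.
    mean_nsat and logLc are assumed to come from the CLF (see the previous sketch)."""
    logL_cen = rng.normal(logLc, sigma_c)          # central: log-normal in L
    n_sat = rng.poisson(mean_nsat)                 # satellites: Poisson occupation (A_P = 1)
    radii = sample_nfw_radii(n_sat, c)             # unbiased w.r.t. dark matter (R = gamma = 1)
    # isotropic angular positions
    phi = rng.uniform(0, 2 * np.pi, n_sat)
    cos_t = rng.uniform(-1, 1, n_sat)
    sin_t = np.sqrt(1 - cos_t**2)
    pos = radii[:, None] * np.column_stack([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])
    return logL_cen, pos

print(populate_halo(c=8.0, mean_nsat=3.2, logLc=10.3, sigma_c=0.16))
```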
We start by computing both the halo-halo autocorrelation function, ξ hh (r|M ) and the halo-matter crosscorrelation function, ξ hm (r|M ), for a number of bins in halo mass. We only consider haloes in the mass range 10 12 h −1 M⊙ ≤ M ≤ 10 14.5 h −1 M⊙. The lower limit is needed to account for the fact that the simulation has a finite mass resolution, while the upper limit is adopted to be less sensitive to cosmic variance originating from the relatively small volume of the simulation box. Over the mass range 10 12 h −1 M⊙ ≤ M ≤ 10 14.5 h −1 M⊙ the halo mass function is in excellent agreement with the fitting function of Tinker et al. (2008), which is also the one used in our analytical calculations. Note that when cross-correlating the haloes with the dark matter particles, we only consider the particles associated with haloes in the mass range 10 12 h −1 M⊙ ≤ M ≤ 10 14.5 h −1 M⊙: A large fraction of all dark matter particles in the simulation box are not associated with any dark matter halo, but that is simply a manifestation of the limited (mass and force) resolution of the Nbody simulation. In other words, the L250 simulation does not properly resolve (non-linear) structure on a mass scale M < 10 12 h −1 M⊙, and we therefore do not expect our model to accurately reproduce the halo-matter cross correlation function of the simulation if the cross correlation is with all dark matter. The resulting ξ hh (r|M ) and ξ hm (r|M ) are shown as filled circles in the upper and lower panels of Fig. 1, respectively. The blue, dashed lines are our model results, which are obtained using the same model as for the galaxygalaxy and galaxy-matter correlation functions described in §2.2, but by setting Nc|M = 1 if the halo mass M falls within the halo mass bin in consideration, and Nc|M = 0 otherwise, plus Ns|M = 0 for all M . Note that all integrals over halo mass are only integrated over the range 10 12 h −1 M⊙ ≤ M ≤ 10 14.5 h −1 M⊙. Also, when Fourier transforming the power-spectrum to obtain the correlation function, we adopt a lower limit for the wavenumbers in order to account for the fact that the simulation box has a finite size and periodic boundary conditions: specifically, in Eq. (42) we replace the lower limit of the integration range by kmin = √ 3 × (2π/L box ). In this model we have set ψ = +∞, which implies that we have simply adopted the radial bias function of Tinker et al. (2005) without any modification (i.e., ζ(r, z) = ζ0(r, z); see §3.4). The model accurately fits the halo-matter cross correlation functions on both small and large scales. The former indicates that our modeling of the halo density profiles, u(r|M ), is accurate (i.e., we are not making a significant error because we do not account for halo triaxiality, halo substructure and scatter in halo concentration; see §3.5), while the good fit on large scales argues that our treatment of halo bias is adequate. However, the model clearly underpredicts ξ hm (r) at the 1-halo to 2-halo transition regime, which is especially conspicuous in the lower mass bin (lower left-hand panel of Fig. 1). The upper panels clearly indicate that this is a reflection of the fact that the model underpredicts the halo-halo correlation function on small scales (∼ 1h −1 Mpc; just before halo exclusion sets in). The solid and dotted lines are models in which we have used our modified version of the radial bias function (Eq. [66]) with ψ = 0.9 and 0.6, respectively. 
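The Fourier transform from the power spectrum to the correlation function with the lower wavenumber cutoff kmin = √3 (2π/Lbox) can be written as a one-dimensional oscillatory integral; the sketch below shows one way to evaluate it, using an illustrative stand-in P(k) rather than the model power spectrum, and compares results with and without the box-induced cutoff.

```python
import numpy as np
from scipy.integrate import quad

def xi_from_pk(r, P, k_min, k_max=100.0):
    """xi(r) = 1/(2 pi^2 r) * int_{k_min}^{k_max} P(k) k sin(k r) dk
    (the spherical Bessel function j0(kr) written out); the lower cutoff
    k_min mimics the finite, periodic simulation box."""
    val, _ = quad(lambda k: P(k) * k, k_min, k_max, weight='sin', wvar=r, limit=500)
    return val / (2.0 * np.pi**2 * r)

L_box = 250.0                                   # Mpc/h, as for the L250 box
k_min = np.sqrt(3.0) * 2.0 * np.pi / L_box

# Stand-in power spectrum, purely for illustration
P_demo = lambda k: 2.0e4 * (k / 0.02) / (1.0 + (k / 0.02) ** 2.9)

for r in [1.0, 5.0, 20.0]:
    print(r, xi_from_pk(r, P_demo, k_min), xi_from_pk(r, P_demo, k_min=1e-4))
```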
The former provides the best-fit overall; it somewhat overpredicts the halo-halo correlation function on small scales in the lowest mass bin, but results in excellent fits to the other correlation functions. The model with ψ = 0.6, on the other hand, clearly overpredicts the small scale clustering of the dark matter haloes for all mass bins. Detailed tests, including additional halo mass bins and other functional forms for a modified ζ(r, z), indicate that Eq. (66) with ψ = 0.9 yield the best results, while still allowing for a sufficiently fast numerical evaluation. We have also experimented with the modification suggested by Tinker et al. (2012; see their Appendix A), which is identical to Eq. (66), except that they adopt r ψ = r200(M1) + r200(M2) rather than Eq. (67). Not only do we find this method to be less accurate, especially for the lower mass bins, but the dependence of r ψ on halo mass also makes the evaluation of Q(k|M1, M2, z) more CPU intensive. Note though, that there is no guarantee that ψ = 0.9 is also the best-fit parameter for any cosmology other than the one considered here. Hence, if we simply adopt ψ = 0.9 when trying to constrain cosmological parameters, we might introduce an unwanted systematic bias. Fortunately, as we demonstrate in Paper II, ψ is only weakly degenerate with the cosmological parameters; most of its degeneracy is with the parameters that describe the satellite CLF. Hence, errors in ψ may result in systematic errors in the inferred satellite fractions, but will not significantly bias our constraints on cosmological parameters. Nevertheless, in order to be conservative, we will marginalize over uncertainties in ψ when fitting for cosmological parameters (see Paper III). Testing Halo Exclusion Having calibrated the scale dependence of the halo bias, we now proceed to test the accuracy of our model in calculating ξgg and ξgm, focusing in particular on the accuracy of our treatment of halo exclusion. Using the mock galaxy distribution (hereafter MGD) of the L250 simulation box, we first compute the real-space correlation function for three different luminosity bins. The orange filled circles in the upper panels of Fig. 2 show the results thus obtained. In the panels in the middle row, we show the contribution to ξgg(r) from the 2-halo term (green filled circles), the 1-halo central-satellite term (purple filled circles) and the 1-halo satellite-satellite term (blue filled circles). In the high-luminosity bin (right-hand panels), the galaxy-galaxy correlation function is dominated by the 1halo central-satellite term on small scales (r < ∼ 0.3h −1 Mpc), and by the 2-halo term on large scales (r > ∼ 1.0h −1 Mpc). On intermediate scales, the 1-halo satellite-satellite term dominates. Note how this term becomes more and more dominant for less luminous galaxies; in fact in the lowest luminosity bin considered here (left-hand panels), the 1-halo satellite-satellite term completely dominates the signal for r < ∼ 1h −1 Mpc. This reflects the fact that the satellite fraction increases drastically from fsat ≃ 0.136 for the brightest bin, to fsat ≃ 0.465 for the intermediate luminosity bin, to fsat ≃ 0.996 for the faintest bin. Note, though, that these satellite fractions are unrealistic due to the adopted cutoffs in halo mass at M = 10 12 h −1 M⊙ and 10 14.5 h −1 M⊙. 
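The satellite fractions quoted in this subsection follow from integrals of the occupation numbers over the halo mass function; the sketch below shows the computation for a single luminosity bin, with and without the halo-mass cuts adopted in the mocks. The mass function and occupation numbers used here are made-up stand-ins, purely to illustrate the effect of the cuts.

```python
import numpy as np

def f_sat(logM, n_logM, Nc, Ns, logM_lo=-np.inf, logM_hi=np.inf):
    """Satellite fraction of a luminosity bin:
    f_sat = int <Ns|M> n(M) dM / int (<Nc|M> + <Ns|M>) n(M) dM,
    optionally restricted to a finite halo-mass range (as in the mocks)."""
    sel = (logM >= logM_lo) & (logM <= logM_hi)
    ns = np.trapz((Ns * n_logM)[sel], logM[sel])
    nc = np.trapz((Nc * n_logM)[sel], logM[sel])
    return ns / (nc + ns)

# Stand-in mass function and occupation numbers on a grid (illustration only)
logM = np.linspace(10.0, 15.5, 400)
n_logM = 10.0 ** (-1.0 - 0.9 * (logM - 10.0))                     # dn/dlogM, made up
Nc = 0.5 * (1.0 + np.tanh((logM - 11.6) / 0.2))                   # <Nc|M> for one luminosity bin
Ns = np.where(logM > 12.0, 10.0 ** (0.9 * (logM - 13.2)), 0.0)    # <Ns|M>

print(f_sat(logM, n_logM, Nc, Ns))                                # full mass range
print(f_sat(logM, n_logM, Nc, Ns, logM_lo=12.0, logM_hi=14.5))    # with the mock mass cuts
```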
For example, for the CLF adopted here, virtually all central galaxies with r-band magnitudes (K-corrected to z = 0.1) in the range −18 ≥ 0.1 Mr − 5 log h ≥ −19.5 reside in haloes with M < 10 12 h −1 M⊙, which are not accounted for in our MGD; hence, almost all mock galaxies in this magnitude range are satellites. For comparison, if we were to integrate our CLF over the entire mass range from M = 0 to M = ∞, the corresponding satellite fractions, given by are equal to fsat = 0.334, 0.253, and 0.167 from the faintest to the brightest bin, respectively. Although the trends seen in Fig. 2 are stronger than what is expected in reality, we consider the fact that the dynamic range in fsat covered is unrealistically large beneficial for the purpose of testing the accuracy of our model. The solid lines in the panels in the upper and middle rows of Fig. 2 are the analytical results obtained using our fiducial model with halo exclusion and with ψ = 0.9. Here we have adopted the same cosmology, redshift and CLF parameters as for the MGD. Note that, once again, all integrals over halo mass are only integrated over the range 10 12 h −1 M⊙ ≤ M ≤ 10 14.5 h −1 M⊙, and we adopt kmin = √ 3 × (2π/L box ) for the integration range in Eq. (42). Overall the agreement between our analytical prediction and the results from the MGD is extremely good. As is evident from the panels in the middle row, our treatment of halo ex- clusion nicely captures the sudden decline of the 2-halo term on small scales. Although the analytical 2-halo term becomes less accurate for r < ∼ 0.5h −1 Mpc, mainly due to numerical issues, at these small scales the 1-halo term always dominates the total correlation function by at least an order of magnitude. Hence, this inaccuracy is of little practical concern. This is evident from the lower panels were we plot the difference between the model prediction and the true correlation function in the mock, normalized by the latter, as function of radius. Over the entire range 0.01h −1 Mpc ≤ r < ∼ 10h −1 Mpc the model predictions agree with the mock results to an accuracy of a few percent (typically < 5%). At the 1-halo to 2-halo transition scale (r ≃ 1h −1 Mpc), which has been notoriously difficult to model accurately, the errors are somewhat larger but always stay below 10%. Fig. 3 shows the same as Fig. 2, but now for the galaxymatter cross correlation, ξgm(r). Similar trends are evident; the model's 2-halo term becomes less accurate on small scales, but this has little to no impact on the quality of the model as is evident from the lower panels. As for the galaxy-galaxy correlation function, the model agrees with the simulation results at the few percent level. In particular, it is noteworthy that the model is accurate at better than 10 percent on small scales. This indicates that non-sphericity of haloes, scatter in halo concentration, and halo substructure, all of which are ignored in our model, do not have a large ( > ∼ 10 percent) impact on the results (see §5 for a detailed discussion). Testing the Approximate Linear Model As we have demonstrated above, our implementation of halo exclusion and scale dependence of the bias are accurate at the few percent level. However, the required computation of Q(k|M1, M2, z), defined in Eq. (20), is fairly CPU intensive. The computation of wp(rp) and ∆Σ(R) for six luminosity bins (i.e., a single model; see paper III) takes ∼ 20 seconds on a single (fast) processor. 
Consequently, the construction of an adequate Monte Carlo Markov Chain (which has to be large given that our model has anywhere from 14 to 19 free parameters, depending on the priors used) takes several days to complete (on a single processor). Although this is not a major challenge in light of the fact that most desktop computers nowadays have multiple processors, it nevertheless would be hugely advantageous if a much faster, approximate method could be found. In particular, the code can be made much faster if we were to ignore halo exclusion and/or the scale dependence of the halo bias. In this section we therefore investigate the pros (increase in speed) and cons (decrease in accuracy) of two different simplifications of our model. The first simplification is to ignore halo exclusion, i.e., we set rmin = 0 in Eq. (11). In that case we have that ξ hh (r, z|M1, M2) = b h (M1, z) b h (M2, z) ζ(r, z) ξmm(r, z), and the two-halo term of the power spectrum (33) simplifies to P 2h xy (k, z) = bx(k, z) by(k, z) Pne(k, z) , where 'x' and 'y' are either 'c' (for central), 's' (for satellite), or 'm' (for matter), with Hx(k, M, z) given by Eqs. This simplified model has the great advantage that it does not require the tedious and CPU intensive evaluation of Q(k|M1, M2, z), causing a speed-up of a factor ∼ 10, while still accounting for the scale dependence of the halo bias. In what follows we shall refer to this model as the 'no-exclusion model'. The solid lines in Fig. 4 show the relative error in ξgg(r) of the no-exclusion model with respect to our fiducial model with halo exclusion. Results are shown for three magnitude bins, as indicated in the top panels, and for two different cosmologies/CLFs. In the upper panels we use the same cosmology and CLF as for the mocks described in §4.1. In the lower panels we use the WMAP3 cosmology, i.e., the cosmological parameters that best fit the three year data release of the Wilkinson Microwave Anisotropy Probe (Spergel et al. 2007) and the best-fit CLF model for that cosmology obtained by Cacciato et al. (2009). The main motivation for showing results for two different cases is to emphasize that the fractional errors of the no-exclusion model may vary quite significantly from one cosmology and/or CLF to another. Clearly the no-exclusion model in general overpredicts the galaxy-galaxy correlation functions on small scales (r < ∼ 2h −1 Mpc) by 20 to 50 percent ¶ . At the risk of further deteriorating the accuracy of the model, we can make additional simplifications by replacing Pne(k, z) in Eq. (84) by the linear matter power spectrum, P lin mm (k, z). This results in the 'linear' halo model, which has been used previously by numerous authors (e.g., Ma & Fry 2000;Seljak 2000;Scoccimarro et al. 2001;Guzik & Seljak 2002;Mandelbaum et al. 2005;Seljak et al. 2005; see also Sheth 2002 andMo et al. 2010). This removes the need for the integration (86) and therefore further speeds up the computation, albeit at the cost of ignoring the scale dependence of the halo bias. The dashed curves in Fig. 4 show how these 'linear' galaxy-galaxy correlation functions compare to the fiducial model with halo exclusion and with scale dependence of halo bias. Somewhat surprisingly, for the cosmology+CLF shown in the upper panels, this linear model performs significantly better than the no-exclusion model, with errors that are always below 10 percent. 
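For illustration, the no-exclusion 2-halo term described above can be written as an effective, profile-weighted galaxy bias times a matter power spectrum. In the sketch below the halo mass function, halo bias, occupation numbers, profiles and power spectra are all stand-in functions (in particular, the true Pne(k, z) of Eq. [84] is replaced here by a generic non-linear P(k)), so the numbers serve only as a demonstration of the bookkeeping, not as predictions of the model.

```python
import numpy as np

logM = np.linspace(11.0, 15.5, 300)
M = 10.0 ** logM
n_of_M = 1e-4 * (M / 1e12) ** -1.9 * np.exp(-M / 1e15)     # stand-in halo mass function dn/dM
b_of_M = 0.6 + (M / 3e13) ** 0.5                           # stand-in halo bias b_h(M)
N_of_M = np.where(M > 1e12, (M / 1e13) ** 0.9, 0.0)        # stand-in occupation <N|M>
u_of_kM = lambda k, M: 1.0 / (1.0 + (k * (M / 1e14) ** (1.0 / 3.0)) ** 2)  # stand-in profile

def bias_eff(k):
    """b_g(k) = (1/n_g) int <N|M> b_h(M) u(k|M) n(M) dM  (no-exclusion 2-halo weighting)."""
    w = N_of_M * n_of_M * M * np.log(10.0)      # n(M) dM = n(M) M ln(10) dlogM
    n_g = np.trapz(w, logM)
    return np.trapz(w * b_of_M * u_of_kM(k, M), logM) / n_g

Pmm_nl = lambda k: 2.0e4 * (k / 0.02) / (1.0 + (k / 0.02) ** 2.5)  # stand-in non-linear P(k)
P_lin  = lambda k: 2.0e4 * (k / 0.02) / (1.0 + (k / 0.02) ** 3.0)  # stand-in linear P(k)

k = 0.5
print("no-exclusion P2h_gg:", bias_eff(k) ** 2 * Pmm_nl(k))
print("'linear' model P2h_gg:", bias_eff(k) ** 2 * P_lin(k))
```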
This indicates that halo-exclusion and scale-dependence of halo bias have comparable but opposite effects on small scales (r < ∼ 1h −1 Mpc), which may roughly cancel each other. The lower panels, however, show that this is not always the case, and that the linear model can significantly underestimate the galaxy-galaxy correlation functions (by as much as 30-40 percent) in the 1-halo to 2-halo transition regime. In addition, the linear model typically overpredicts the correlation power on large scales of ∼ 10h −1 Mpc by 10 percent. This is a well known effect that has already been discussed in numerous studies of the halo model (e.g., e.g., Ma & Fry 2000;Seljak 2000;Scoccimarro et al. 2001;Smith et al. 2003;Cole et al. 2005;Hayashi & White 2008). Finally we note that similar tests for the galaxy-matter cross correlation functions yield fractional errors for the no-exclusion and linear models that are very similar as for the galaxy-galaxy correlation functions shown in Fig. 4. Hence, despite the order of magnitude increase in computational speed, we conclude that both the 'no-exclusion' model and the 'linear' model suffer from systematic inaccuracies that can easily reach 30 to 40 percent, which we consider inadequate for the purpose of constraining cosmological parameters. In Papers II and III we therefore exclusively use the much more accurate, but more CPU intensive, ¶ The sharp features apparent around 0.3h −1 Mpc are not due to numerical noise, but are real manifestations of halo exclusion. model described in §2 above, which properly accounts for both halo exclusion and scale dependence of the halo bias. Redshift Space Distortions As discussed in §2.3, the projected correlation functions used to constrain the models have been obtained using a finite range of integration along the line-of-sight. Consequently, they suffer from residual redshift space distortions (RRSDs) that need to be corrected for. In this section we investigate the magnitude of these RRSDs, as well as the accuracy of our correction method, which is based on the linear Kaiser formalism (Kaiser 1987). To that extent we use the mock galaxy distribution (MGD) obtained from the L1000W simulation box, as described in §4.1. We first use this MGD to compute the projected correlation function, wp(rp), for three luminosity bins, by integrating the corresponding ξgg(rp, rπ) out to rmax = 40h −1 Mpc . Note that this is the same value of rmax as used by Zehavi et al. (2011) for computing the projected correlation functions of faint galaxies in the SDSS DR4. Next we compute the same wp(rp), but this time we set the peculiar velocities of all galaxies to zero, i.e., we simply set rπ = r 2 − r 2 p , where r is the real-space separation between two galaxies. The ratio of these two 'measurements' of the projected correlation function, shown as filled circles in Fig. 5, indicates the error one makes in the estimate of wp(rp) when ignoring the RRSDs, i.e., when computing wp(rp) using wp(rp) = 2 rout rp ξgg(r) r dr with rout = r 2 p + r 2 max . As discussed in §2.3, this is the standard method used by numerous authors in the past. The MGD results in Fig. 5 show that ignoring RRSDs causes an error in wp(rp) that exceeds 10 percent on scales > ∼ 10h −1 Mpc. Note, though, that in the MGD we only populated haloes in the mass range 10 13 h −1 M⊙ ≤ M ≤ 10 14.5 h −1 M⊙. As we show below, using the full mass range results in RRSDs that are even larger. The dashed line indicates the correction factor fcorr given by Eq. (48). 
This correction factor is based on the Kaiser formalism for the linear velocity field, and is computed using the linear galaxy-galaxy correlation function given by Eq. (49). Note that the resulting fcorr provides a fairly accurate description of the RRSDs resulting from using a finite rmax, at least at large scales. However, on small scales it clearly overpredicts fcorr by a few percent. Hence, using this correction factor would overpredict wp(rp) by a similar amount on small scales. The solid line shows the correction factor obtained by simply replacing ξ^lin_gg(r) in Eq. (48) and Eqs. (51)-(55) by the non-linear version ξgg(r). Although the Kaiser formalism is strictly only valid in the linear regime, this simple modification works remarkably well; the model now accurately reproduces the mock results on small scales. On larger scales, the model somewhat overpredicts fcorr compared to the mock results. From the ratio between the two we estimate that the final error we make on wp(rp) from the imperfect correction for RRSDs is always less than 2 percent over the scales of interest. [Footnote: Here we have assumed that the plane-parallel approximation holds.] [Figure 6 caption: The RRSD correction factor, fcorr(rp), for different values of the integration range rmax, as indicated. All these correction factors have been obtained for galaxies with −21 ≤ 0.1Mr − 5 log h ≤ −19.5, assuming the same cosmology and CLF as for the L1000W mock (i.e., similar to the middle column in Fig. 5). Note that fcorr for rmax = 40 h^−1 Mpc is larger than in the case of Fig. 5; this is due to the fact that here we integrate over all halo masses, whereas in Fig. 5 we only considered haloes with 10^13 h^−1 M⊙ ≤ M ≤ 10^14.5 h^−1 M⊙ in order to allow for a fair comparison with the mock results. Note also that even for rmax = 200 h^−1 Mpc the correction factor exceeds 5 percent for rp > ∼ 30 h^−1 Mpc.] Finally, having demonstrated that fcorr(rp, z), obtained using the non-linear galaxy-galaxy correlation function, provides an accurate description of the RRSDs that arise from using a finite integration range, we can use it to predict the magnitude of RRSDs for different values of rmax. Fig. 6 shows fcorr(rp) for five different values of rmax, as indicated. Contrary to the results shown in Fig. 5, which only considered haloes in the mass range 10^13 h^−1 M⊙ ≤ M ≤ 10^14.5 h^−1 M⊙ in order to allow for direct comparison with the mock results, the results in Fig. 6 have been obtained by integrating over all halo masses. Note that this results in fcorr values for rmax = 40 h^−1 Mpc that are significantly larger than those in Fig. 5. In particular, using rmax = 40 h^−1 Mpc without a correction for RRSDs underestimates wp(rp) at rp = 20 h^−1 Mpc by ∼ 35 percent! Even when using rmax = 200 h^−1 Mpc, the RRSDs cause errors in the projected correlation function that exceed 5 percent for rp > ∼ 30 h^−1 Mpc. Clearly, correcting for RRSDs is extremely important, especially when using projected correlation functions to constrain cosmological parameters. The modified Kaiser method presented here corrects for these RRSDs to an accuracy of better than 2 percent. SHAPES, ALIGNMENT, SUBSTRUCTURE AND CONTRACTION OF DARK HALOES As discussed in §3.5, our model assumes that dark matter haloes are spheres with an NFW density profile. Clearly, this is a highly oversimplified picture. In reality, dark matter haloes are triaxial, have substructure, and have a density profile that may have been modified due to the action of galaxy formation.
In addition, our model ignores the fact that there is significant scatter in the relation between halo mass and halo concentration. After discussing how each of these effect impacts the accuracy of our oversimplified model, we show how we can take these shortcomings into account by marginalizing over the normalization of the concentration-mass relation of dark matter haloes. Halo Shapes and Alignment The assumption that dark matter haloes are spherical is inconsistent with expectations based on numerical simulations (e.g., Bailin & Steinmetz 2005;Allgood et al. 2006) and/or non-spherical collapse conditions (e.g., Zel'dovich 1970;Icke 1973;White & Silk 1979). As shown by Yang et al. (2004), assuming that haloes are spherical underestimates the correlation function obtained if haloes are represented by FOF groups in numerical simulations by as much as ∼ 20 percent on small scales (r ∼ 0.1h −1 Mpc). A similar test was recently performed by van Daalen, Angulo & White (2011), who basically came to the same conclusion. However, these tests of the impact of halo triaxiality are not directly applicable to our model. After all, our model uses halo mass functions and halo bias functions in which haloes are specifically defined as spherical volumes. Hence, a fair assessment of the impact of the non-spherical symmetry of dark matter haloes on our results should compare a correlation function in which it is assumed that all matter within the spherical volume of the halo has spherical symmetry (i.e., our model assumption) to one in which the dark matter particles and galaxies within the same spherical volume are given a more realistic distribution that is not spherically symmetric. Note that this is not the same as a comparison of spherical haloes to FOF haloes, since the latter typically do not occupy a spherical volume. As demonstrated by More et al. (2012, in preparation), this yields correlation functions that only differ at the 5 to 10 percent level. Detailed theoretical calculations by Smith & Watts (2005) reach a similar conclusion, that ignoring halo triaxiality only impacts the two-point correlation functions at the level of ∼ 5 percent. This is also consistent with Li et al. (2009), who performed detailed tests that showed that non-sphericity of dark matter haloes has only a small effect of < ∼ 5 percent on the excess surface densities, and only on the smallest scales probed by the data. Hence, we conclude that our model assumption that haloes are spherical may underpredict both ξgg(r) and ξgm(r) on small scales (r < 1h −1 Mpc), but by no more than ∼ 10 percent. However, the fact that haloes have triaxial, rather than spherical shapes, also implies that another effect might in principle be important, namely halo alignment. Such potential alignment between haloes is not accounted for in our model, which therefore might cause systematic errors in our two-point correlation functions. However, Smith & Watts (2005) have shown that a strict upper bound for the effect of intrinsic alignment is a 10 percent effect on the two-point correlation function (corresponding to a scenario with maximum alignment). Van Daalen et al. (2011) have shown that realistic amounts of alignment, as present in numerical simulations of structure formation in a ΛCDM cosmology, has an effect on the correlation functions that is not larger than ∼ 2 percent. We therefore conclude that potential halo alignment can be safely ignored. 
Halo Concentrations As discussed in §3.5, we assume that dark matter haloes have NFW density profiles with a concentration-mass relation given by Macciò et al. (2007), properly converted to our definition of halo mass. This ignores, however, that there is a substantial amount of scatter in the concentration-mass relation. In particular, numerical simulations show that the concentrations, c, for haloes of mass M at redshift z follow a log-normal distribution, where c̄ = c̄(M, z) is the median halo concentration for a halo of mass M at redshift z, and σlnc ≃ 0.3 (Jing 2000; Bullock et al. 2001; Wechsler et al. 2002; Sheth & Tormen 2004; Macciò et al. 2007). Because of this scatter, the proper ũh(k|M, z) to use in the halo model is ũh(k|M, z) = ∫ ũh(k|c) p(c|M, z) dc (Giocoli et al. 2010b). However, in order to speed up the computations, we ignore this scatter and simply use ũh(k|M, z) = ũh(k|c̄(M, z)) instead. The impact of this oversimplification is shown in Fig. 7, where the symbols show ũh(k|M, z)/ũh(k|c̄(M, z)) − 1, with ũh(k|M, z) given by Eq. (89). Results are shown for three different values of σlnc, as indicated, and are obtained using M = 10^12 h^−1 M⊙ and c̄ = 10. Taking the scatter in halo concentration into account boosts ũh(k) on small scales (k > ∼ 10 h Mpc^−1) by an amount that increases with σlnc (see also Hu 2001 and Giocoli et al. 2010b). For σlnc = 0.3 this boost is of the order of 10 percent. The solid lines in Fig. 7 show ũh(k|c)/ũh(k|c̄) − 1, where c = c̄(1 + 0.8σ²lnc). Although certainly not a perfect fit, this simple relation gives a reasonable description of the impact of ignoring the scatter in p(c|M, z). It shows that for σlnc = 0.3, the error made ignoring this scatter is similar to the error made if c̄(M, z) is underestimated by a factor 1 + 0.8σ²lnc ≃ 1.07. This is comparable to the differences in the c̄(M, z) relation obtained by different authors (e.g., Eke, Navarro & Steinmetz 2001; Bullock et al. 2001; Macciò et al. 2007; Zhao et al. 2009). Hence, it is at least as important to obtain a more reliable calibration of the median of p(c|M, z) than to take account of its scatter. As we discuss in §5.5 below, because of these uncertainties, and because of other oversimplifications of our model, we will marginalize over the normalization of the concentration-mass relation, c̄(M, z), when constraining cosmological parameters (see Paper III). The results shown here indicate that such a marginalization also captures the inaccuracies arising from the fact that we ignore the scatter in p(c|M). [Figure 7 caption (beginning truncated): ... (Eq. [89]), while ũh(k|c̄) is the normalized density profile for the median halo concentration, c̄. Hence, this ratio indicates the error made in ũh(k|M) when ignoring the scatter in halo concentration. The solid lines show the same ratio, but this time ũ(k|M) is computed under the assumption of zero scatter, and using a concentration parameter c = c̄(1 + 0.8σ²lnc). The reasonable agreement with the open symbols indicates that, to good approximation, one can mimic the effect of non-zero scatter in P(c|M) by simply computing ũ(k|M) for a halo concentration that is a factor 1 + 0.8σ²lnc larger than the median concentration.] Halo Substructure Another oversimplification of our model is that we assume that dark matter haloes have a smooth density distribution. However, numerical simulations of hierarchical structure formation have shown that haloes are not smooth, but have a significant population of dark matter subhaloes (e.g., Moore et al. 1998; Springel et al. 2001).
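The scatter-averaged profile of Eq. (89) and the boosted-concentration approximation c = c̄(1 + 0.8σ²lnc) can be compared directly; the sketch below does so for a log-normal p(c|M) with σlnc = 0.3, writing ũ as a function of concentration only (wavenumbers expressed in units of 1/r200), with the median concentration chosen arbitrarily for illustration.

```python
import numpy as np
from scipy.special import sici
from scipy.integrate import quad

def u_nfw(k, c):
    """Normalized NFW profile in Fourier space as a function of concentration only
    (k in units of 1/r200, so that mu = k r_s = k / c)."""
    mu = k / c
    si_lo, ci_lo = sici(mu)
    si_hi, ci_hi = sici((1 + c) * mu)
    f = 1.0 / (np.log(1 + c) - c / (1 + c))
    return f * (np.sin(mu) * (si_hi - si_lo) + np.cos(mu) * (ci_hi - ci_lo)
                - np.sin(c * mu) / ((1 + c) * mu))

def u_scatter(k, c_med, sig_lnc=0.3):
    """Scatter-averaged profile: int u(k|c) p(c|M) dc for a log-normal p(c|M)."""
    def integrand(lnc):
        c = np.exp(lnc)
        p = np.exp(-(lnc - np.log(c_med))**2 / (2 * sig_lnc**2)) / (np.sqrt(2 * np.pi) * sig_lnc)
        return u_nfw(k, c) * p
    return quad(integrand, np.log(c_med) - 5 * sig_lnc, np.log(c_med) + 5 * sig_lnc)[0]

c_med, sig = 10.0, 0.3
for k in [3.0, 10.0, 30.0, 100.0]:
    exact = u_scatter(k, c_med, sig)
    approx = u_nfw(k, c_med * (1 + 0.8 * sig**2))        # boosted-concentration approximation
    print(k, exact / u_nfw(k, c_med) - 1.0, approx / u_nfw(k, c_med) - 1.0)
```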
Approximately 10 percent of the mass of a dark matter halo is associated with these subclumps, with a weak dependence on halo mass and cosmology (e.g., Gao et al. 2004; van den Bosch et al. 2005b; Giocoli et al. 2008, 2010a). Since these subhaloes are believed to host satellite galaxies, they will impact the galaxy-matter cross correlation function on small scales. However, as shown by Li et al. (2009), the impact of substructure is negligible on the radial scales of interest, i.e., on the scales for which we currently have data on ∆Σ(R) available (R > ∼ 0.05 h^−1 Mpc). Hence, we conclude that we do not make significant errors by ignoring dark matter substructure. The Impact of Baryons Although numerical simulations of structure formation have established that dark matter haloes follow a universal profile that is accurately described by the NFW profile (Eq. [68]), this ignores the impact of baryons. During the process of galaxy formation, baryons collect at the center of the halo potential well and may subsequently be expelled due to feedback processes. Because of the gravitational interaction between baryons and dark matter, the dark matter halo will respond to this galaxy formation process. It is often assumed that the impact of baryons is to cause (adiabatic) contraction of the dark matter haloes (e.g., Blumenthal et al. 1986; Gnedin et al. 2004; Abadi et al. 2010; see also Schulz, Mandelbaum & Padmanabhan 2010 for observational support). However, it is also possible for haloes to expand in response to galaxy formation; rapid mass-loss from the galaxy due to (repetitive) feedback from supernovae and/or AGN (e.g., Pontzen & Governato 2012), dynamical friction operating on baryonic clumps (e.g., El-Zant, Shlosman & Hoffman 2001; Mo & Mao 2004), and galactic bars (e.g., Weinberg & Katz 2002) all may cause dark matter haloes to become less centrally concentrated than their 'pristine' (i.e., without galaxy formation) counterparts. Interestingly, both galaxy rotation curves and galaxy scaling relations suggest that dark matter haloes are less centrally concentrated than what is expected in the absence of baryonic processes in a CDM dominated universe (e.g., Swaters et al. 2003; de Blok et al. 2008; Dutton et al. 2011; Trujillo-Gomez et al. 2011). Although this may suggest that galaxy formation indeed results in a net halo expansion, it may also indicate that dark matter is not dark, but warm (e.g., Sommer-Larsen & Dolgov 2000) or self-interacting (e.g., Spergel & Steinhardt 2000). We conclude that the detailed density profiles of dark matter haloes carry a significant uncertainty, which needs to be accounted for. Marginalization All the effects discussed above, regarding halo shape, scatter in halo concentrations, halo substructure, and halo contraction/expansion, impact the 1-halo terms of the correlation functions by either boosting or suppressing power on small scales. What is ultimately of importance for the accuracy of our models is the combined impact of all these effects. The combined impact of all effects except for that of halo contraction/expansion can be gauged from the lower panels of Fig. 3, which show that our model is consistent with the simulation results, in which the haloes have realistic, triaxial density distributions, have substructure, and have non-zero scatter in the concentration-mass relation, to better than 10 percent. This test therefore confirms that our oversimplifications are accurate at the 10 percent level.
We caution, though, that this test does not account for possible halo contraction/expansion due to baryons, whose impact is difficult to gauge in the absence of a more detailed understanding of galaxy formation. Hence, when constraining cosmological parameters (see Paper III), we will take all these oversimplifications regarding the density distributions of dark matter haloes into account by marginalizing over the normalization of the concentration-mass relation, c̄(M, z). In particular, we introduce the parameter η, so that the concentration for a halo of mass M is given by (1 + η) × c̄(M, z), where c̄(M, z) is the average concentration-mass relation of Macciò et al. (2007), properly converted to our definition of halo mass. As a prior we assume that the probability distribution function (PDF) for η is given by a Gaussian, P(η) ∝ exp[−η²/(2σ²η)], where we adopt ση = 0.1. Fig. 8 shows the impact of η on the galaxy-matter cross-correlation function for galaxies with magnitudes in the range −18 ≥ 0.1Mr − 5 log h ≥ −19.5 (results for other magnitude bins are very similar). The dashed and solid lines show the fractional changes in ξgm(r) for η = ±0.1 and ±0.2, respectively, which correspond to the 68 and 95 percent confidence intervals of the prior PDF. Note how η = ±0.2 modifies the one-halo term of ξgm(r) by more than 20 percent on small scales (r < 0.1 h^−1 Mpc), which we argue is more than adequate to capture the inaccuracies in our model that arise from the various oversimplifications discussed above (see Paper III for more details, and for a discussion of the posterior distribution of η and its implications). [Figure 8 caption: The impact on the galaxy-matter cross correlation function, ξgm(r), of multiplying the normalization of the concentration-mass relation, c̄(M), of dark matter haloes by a factor (1 + η), where η = ±0.1 (dashed lines) or η = ±0.2 (solid lines). Here we have, once again, adopted the same cosmology and CLF as for the mocks described in §4.1.]
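In practice the marginalization amounts to one extra nuisance parameter with a Gaussian prior; a minimal sketch is given below, where the median concentration-mass relation is a stand-in power law rather than the Macciò et al. (2007) relation used in the actual analysis, and the function names are purely illustrative.

```python
import numpy as np

sigma_eta = 0.1

def log_prior_eta(eta):
    """Gaussian prior on the concentration-normalization nuisance parameter eta (mean 0, sigma 0.1)."""
    return -0.5 * (eta / sigma_eta)**2 - 0.5 * np.log(2 * np.pi * sigma_eta**2)

def c_of_M(M, eta, c_median):
    """Concentration-mass relation rescaled by the nuisance parameter eta."""
    return (1.0 + eta) * c_median(M)

c_median = lambda M: 9.0 * (M / 1e13) ** -0.094    # stand-in median c(M), illustration only
print(c_of_M(1e13, 0.1, c_median), np.exp(log_prior_eta(0.1) - log_prior_eta(0.0)))
```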
The method uses the halo model to analytically describe the (non-linear) matter distribution, and the conditional luminosity function (CLF) to specify the halo occupation statistics. For a given choice of cosmological parameters, which determine the halo mass function, the halo bias function, and the (non-linear) matter power spectrum, this model can be used to predict the galaxy luminosity function, the two-point correlation functions of galaxies as function of both scale and luminosity, and the galaxy-galaxy lensing signal, again as function of both scale and luminosity. These are all observables that have been measured at unprecedented accuracies from the Sloan Digital Sky Survey, and can therefore be used to constrain cosmological parameters. In this paper we presented, in detail, our analytical framework, which is characterized by • a treatment for scale dependence of halo bias on small scales, using a modified version of the empirical fitting function of Tinker et al. (2005). • a proper treatment for halo exclusion, similar to that of Smith et al. (2007), which is correct under the assumption that dark matter haloes are spherical. • a correction for residual redshift space distortions (RRSDs) using a slightly modified version of the linear Kaiser formalism. We have tested the accuracy of our analytical model using detailed mock galaxy distributions, constructed using highresolution numerical N -body simulations. We have shown that our analytical model is accurate to better than 10 percent (in most cases better than 5 percent), in reproducing the 3-dimensional galaxy-galaxy correlation and the galaxy matter correlation in the mock galaxy distributions over a wide range of scales (0.03h −1 Mpc < ∼ r < ∼ 30h −1 Mpc). In order to reach this level of accuracy we had to introduce, and tune, one free parameter that describes a modification of the empirical fitting function of Tinker et al. (2005) for the radial halo bias dependence. This modification is required because this fitting function is only valid for a particular definition of halo mass that is different than the one adopted here (see also Tinker et al. 2012). When fitting the data in order to constrain cosmological constraints, we will marginalize over uncertainties in this free parameter (see Papers II and III). We have demonstrated that ignoring halo exclusion and/or the scale dependence of the halo bias results in errors in ξgg(r) and ξgm(r) in the 1-halo to 2-halo transition regime (r ∼ 1h −1 Mpc) that can easily be as large as 40 percent. The correction for RRSDs is necessary because projected correlation functions are always obtained by integrating along the line-of-sight out to a finite radius (typically rmax ∼ 40 − 80h −1 Mpc) rather than out to infinity. In agreement with the results of Norberg et al. (2009), we show that not taking these RRSDs into account results in systematic errors that can easily exceed 20 percent on large scales (rp > ∼ 10h −1 Mpc), which can cause systematic errors in the inferred galaxy bias (see More 2011). As we demonstrate in Paper III, when unaccounted for these RRSDs can also result in significant systematic errors in the inferred cosmological parameters. Fortunately, as we have demonstrated, it is fairly straightforward to correct for these RRSDs, to an accuracy better than ∼ 2 percent, using a mildly modified version of the linear Kaiser formalism (Kaiser 1987). 
Finally, the good accuracy of our analytical model on small scales for the galaxy-matter and halo-matter cross correlation functions (better than 10 percent) indicates that ignoring halo triaxiality, halo substructure, and scatter in the halo concentration-mass relation does not have a large impact, contrary to recent claims by van Daalen et al. (2011) who argue that halo triaxiality alone may cause inaccuracies as large as 20 percent. We argue that this apparent discrepancy mainly owes to different definitions of dark matter haloes (see discussion in § 5.1). Nevertheless, we have shown that, in order to be conservative, one can take these inaccuracies that arise from oversimplifications of the halo mass distributions into account by marginalizing over uncertainties in the normalization of the concentration-mass relation of dark matter haloes. As indicated above, this is the first paper in a series. In Paper II (More et al. 2012a), we perform a Fisher matrix analysis to (i) investigate the strength of each of the datasets (luminosity function, projected correlation functions, and excess surface densities), (ii) identify various degeneracies between our model parameters, and (iii) forecast the accuracy with which various cosmological parameters and CLF parameters can be constrained with current data. In Paper III (Cacciato et al. 2012b) we apply our method to data from the Sloan Digital Sky Survey and present the resulting constraints on both cosmological parameters (fully marginalized over the uncertainties related to galaxy bias) and the CLF parameters (fully marginalized over uncertainties in cosmological parameters). ACKNOWLEDGMENTS The work presented in this paper has greatly benefited from discussions with Matthew Becker, Alexie Leauthaud, Nikhil Padmanabhan, Eduardo Rozo, Roman Scoccimarro, Jeremy Tinker, Risa Wechsler, Idit Zehavi and Zheng Zheng. The analysis of numerical simulations used in this work has been performed on the Joint Fermilab -KICP Supercomputing Cluster, supported by grants from Fermilab, Kavli Institute for Cosmological Physics, and the University of Chicago. FvdB acknowledges support from the Lady Davis Foundation for a Visiting Professorship at Hebrew University.
Legal pluralism, social theory, and the state Abstract Legal pluralism has seen a marked rise in interest since the turn of the century. While long rejected in legal studies, legal pluralism is now widely accepted, not least in light of the broad range of perspectives on the state it has sought to interpret and it has produced. A crucial change could be noted in the 1970s, when legal anthropologists began to demonstrate the applicability of this term, and not just in anthropological thinking about law. Political and economic developments also profoundly changed constellations of legal pluralism, following diverse trajectories in which the concept obtained multiple meanings. While highlighting significant stages of this process, this chapter discusses how anthropological insights in law and legal pluralism metamorphosed from the study of law in colonial societies towards law of a widely varying scope under conditions of ever-increasing global connectedness. The epistemological insights drawn from the diverse trajectories reflect and shape the social theories of the time, where intersections of state and law represent a central theme, albeit to a greater extent in some periods, and virtually absent in others. Introduction Long rejected in legal studies, the term legal pluralism has seen a remarkable rise of interest since the turn of the century. Now legal pluralist approaches both reflect and produce new perspectives on the role of the state in plural legal orders. This article focuses on the role of state law and discusses significant stages in the development of research on legal pluralism. The study of law and legal pluralism occurred in dialog with various strands of social theories, ranging from highly abstract theories, such as evolutionist theories, to general theories of social phenomena, such as structuralism, functionalism, and actor-oriented approaches, such as structuration, social constructivism, and actor network theory. More recently, this list has also come to include social theories concerned with specific political and economic development, such as globalization theories. Such theoretical tools proved essential for analytical frameworks to be able to address the anthropological dimensions and in the context of global modernity to direct the analytical gaze to everyday instances where law, the state, and the social interact. We highlight some intermediary stages in the development of general social theory that display simultaneous, interacting and co-constituting trajectories and the emergence of various approaches to studying the role of the state and its law in plural legal configurations. 1 We argue that the analysis of these simultaneities sheds light on modifications that have had an impact on how the state is viewed in legal pluralism. We proceed from the understanding that there is no impermeable disciplinary demarcation between anthropology and other social sciences, nor do we see the anthropology of law as a closed sub-discipline (F. von Benda-Beckmann 2005). While the focus of this chapter is on anthropological understandings, where appropriate, we include disciplines that are intimately linked with anthropology. However, what distinguishes anthropological work from other social science studies is its inherent focus on everyday practices in (multi-)local situations and the aim to capture the terms by which people understand their social life. 
Legal complexity in early history Although a fairly recent concept in the social sciences, the phenomenon of legal pluralism has persisted throughout history. It has provided the very "condition of possibility" for pre-modern empires and thus has been part of a normative logic of statehood. Such a model of statehood also referenced the diversity of its constitutive people in terms of normativity. All early empires recognized this and dealt with it in pragmatic ways. In a post-Westphalian world order, two entangled strands of politico-legal development gained momentum: the nation state and its political counterpart, the imperial and colonial state. This development entailed the axiomatic shift from "where there is society, there is law" to "where there is state, there is law." Only with the establishment of nation states and of ideologies that canonized the state-people-law nexus in the nineteenth century did the prevalence of legal pluralism come to be seen as problematic. This coincided with theories of modernity and evolutionist conceptions of social organization, development, and linear progress, which included imperialism and colonialism. While emerging nation states sought to eliminate all traces of legal pluralism from domestic legal ideology, even though it continued to exist unabated in practice, in colonial states the realities of legal pluralism needed to be acknowledged, not least as an administrative necessity. In this context, colonial empires began to distinguish "modern" law from customs, tradition, and "primitive" law. In line with evolutionist thinking, scholars began to systematically collect and compare "precursors" of modern law. But this issue was not merely of theoretical interest. It was also of high political relevance because, according to legal doctrine, customs and tradition, in contrast to customary or traditional law, could be disregarded at the whim of the administration. Colonial experiences incited lawyers and administrators to consider whether existing normative orders in the colonies could be characterized as law or as mere custom. Scholars of different provenience also developed an interest in local laws. There was greater "disciplinary job sharing" among scholars of different disciplines than today (Turner 2017, 291-295). Most scholars adopted evolutionary theories aimed at a universal theory of law. 2 These theories presumed that early models of order had been based on feuds and retaliation, genealogical relationships, and communal ownership. These evolved into hierarchical societies with property regimes based on individual ownership. But it was only with the emergence of states that law assumed its full role of maintaining order. Circumstances such as these provided the setting in which anthropology emerged as a discipline, whereby "anthropology of law," as one of its constitutive sections, developed in dialogue with legal history and legal philosophy. Towards the end of the nineteenth century, a comparative interest emerged in uncodified or "primitive" law as a precursor of modern law, as orally transmitted remnants of the earliest forms of law. The study of these laws made early forms methodologically accessible. Following the standard anthropological methods of the time, questionnaires were designed for various colonial territories, based on which colonial administrators, missionaries, judges, and travelers were expected to generate descriptions of the local and regional laws.
The standard format would enable systematic comparison of different manifestations of "primitive law." However, the fact that the questionnaire was based on European legal systems led to serious misrepresentations. Thus, within the ambit of disciplines concerned with a universal legal history of humankind, the aim of legal anthropology was twofold: on the one hand, to research, phaseologically, pre- or early law in presentist environments and, on the other, to address the challenges of "legal pluralism" as an applied colonial practice. The mass of collected data showed that the transition from "primitive" to modern law was not unilineal. Complex legal configurations in which incommensurable legal systems overlapped in time and space were the rule and endured within and across state boundaries. From colonialism to the postcolony At the turn of the twentieth century, new fields and methods of inquiry emerged with growing interest in what constituted law in daily social and economic interactions. A holistic approach was supposed to guarantee the study of all aspects of social life. Anthropological fieldwork entailed full immersion in a society to allow the researcher to see how law was used in daily practices. Societies with relatively loose forms of institutionalization and flat hierarchies were studied without interrogating the effects of colonial rule. Such research nevertheless helped to understand how law could function in societies without specialized institutions for legislation and law enforcement, and to what extent social organization relied on reciprocity and on community as a public forum. That line of research embraced functionalist perspectives and paid special attention to the social function of law and customs. Parallel to this development, as legal scholars began to engage in empirical research, emergent disciplinary boundaries between social and legal sciences were blurred. In the Netherlands, Van Vollenhoven (1909) argued that a deep understanding of customary law was essential for the Dutch colonial government. Misunderstanding its character led to illegal expropriation of land and other resources. His interest in the diversity of laws, their similarities and differences across the Malay Archipelago, was genuinely academic, notwithstanding his political motivation. The relations between local and colonial laws and the manner in which colonial courts determined how local laws were interpreted were key to his analyses (F. von Benda-Beckmann 2002; Benda-Beckmann and Benda-Beckmann 2011). Colonial administrative activity itself contributed to the complexity and diversity of plural legal configurations. Parts of local legal registers interacted with the legal order of the colonial state as they were acknowledged and codified by the colonial state. In this way, they became entangled with the dynamics within traditional and religious normativity. The result was that the borders between state and other-than-state law were sometimes barely recognizable. Similar diversity would later be rediscovered both in the post-colonial and the industrialized colonizing states. The pioneers of this era, among them the anthropologists Bronislaw Malinowski and Richard Thurnwald, as well as the legal scholar Eugen Ehrlich, built the foundation for modern anthropological work on law through various trajectories. Primarily interested in law as an organizing principle of society that ensured social cohesion, they paid relatively little attention to conflicts and disputes.
Pioneering the study of local laws in relation to the state and its law, however, Van Vollenhoven argued that adat (customary) law inevitably undergoes change when colonial courts and administrative institutions use it, which Anglo-American legal anthropology did not take into account until the 1970s. His analytical concepts underscored his criticism of the colonial government, which systematically, and often intentionally, misrepresented the character of adat regimes and thereby violated its promise to fully recognize adat law. Careful academic analysis thus had profound political implications, for it showed that the government's expropriation of large tracts of land for economic development was largely illegal. That brought forth the insight that more knowledge of "unadulterated customary law" was needed, a trajectory that allowed legal anthropologists to step out of the shadow of the colonial state. Thus, the focus on law as organized in registers that inevitably share basic features and allow for a comparative analysis did not go unchallenged. Pioneers in the African colonial context, such as Max Gluckman and his Manchester School, put customary law center stage, advancing the notion that customary law was best understood through the study of disputes by means of the extended case method. However, the disadvantage of this very method was that they lost sight of the state. In fact, many authors interested in customary laws in this period showed a remarkable lack of interest in the state. Eventually, a paradigm shift could be noted in the last phases of colonialism, roughly between the 1940s and the 1970s, when legal anthropologists took to conceptualizing customary law devoid of the state. The interest shifted towards the "pathological aspects of law," that is, disputes and processes of dispute resolution (Twining 2012, 123). This coincided with new theoretical views that underlined the importance of conflict for maintaining society (Nader 1965, 21). In the US, Llewellyn and Hoebel (lawyer and anthropologist, respectively) jointly published a famous study of Cheyenne law in 1941 based on an analysis of case material. It was inspired by legal realists, who argued that regulations obtained their true meaning through interpretation in real situations, so that only court decisions could shed light on the precise content of a law (Darian-Smith 2013a, 62). Nevertheless, this line of research eventually brought legal pluralism back on the agenda as it paved the way from disputes as a source of law to law in practice. Two research strands could be distinguished in the use of the case method to study disputes, each with a different theoretical objective (Roberts 1979a). Anthropologists like Leopold Pospíšil (1971) used disputes as a source from which unwritten law could be distilled. Others took a more process-oriented approach, studying disputes to understand processes of disputing behavior and decision-making (Comaroff and Roberts 1981). The interest pursued by the German and Dutch legal anthropologists was broader in scope. They argued that the significance of law derived from its use in social, economic, and political life in general, and only secondarily from disputes, calling for the study of "trouble cases" and "trouble-less cases" (Holleman 1981; F. von Benda-Beckmann 1985). In these studies, the state represented a part of the complex legal situation in which people operated.
While for some anthropologists the guiding theoretical questions addressed how law was used in decision-making, others also studied how law affected practices of negotiation. The shift in focus from code to cases, and subsequently to contexts and settings, eventually brought the state back on the agenda and with it also legal pluralism. Dispute studies were thus instrumental in the 'discovery' of plural legal configurations within the modern nation state. Consolidating research on disputing processes In the 1970s, insights into disputing processes were further refined with field research in so-called developing countries and industrial societies. 3 At the height of the independence movements against the colonial regimes, when postcolonial and emerging states and their institutions became the subject of critical reflection, the nation state became the standard organizing principle of the political world order. This cast doubt on the portrayal of state involvement as negligible or nonexistent. 4 Critics argued that the state had been edited out, although state institutions, such as the police, were, in fact, much closer than some studies suggested. This had two important implications. Anthropologists, once again, began to pay attention to the relationship between local normative orders and that of the state. In addition, the notion of customary law itself was criticized. Sally Falk Moore (1978a) argued that state law often was mediated through the relationships and networks in what she called "semi-autonomous social fields." Social fields had internal rules and sanctions that interacted with the official state regulations, which were filtered through the internal normative order. This publication became the starting point for a new anthropology of law in which the interrelations between state law, religious law, unofficial law, and customary law had moved center stage. American scholars who had previously conducted fieldwork solely outside the US shifted their research focus to the US. In order to gain insights into disputing processes in the US, 5 they used the lens of legal pluralism developed in colonial settings. This allowed them to identify "at home" mechanisms of "informal justice" in places outside of formal courts. Such research suggested that many disputes in the US are dealt with by way of other-than-state law, within many "rooms" in which justice is extended, or not, and that disputing parties are frequently engaged in "forum shopping," a term borrowed from private international law (Galanter 1974, 1981). Systematic reflection on power differentials between parties explained why stronger parties tended to profit most from the judicial system. Legal anthropologists criticized the lack of mechanisms mitigating power differentials in mediation. Felstiner, Abel, and Sarat (1981) showed that only a minute section of all potential grievances actually reaches the courts and discussed the intervening factors along the trajectory. Resorting to court within a local community was often seen as an inappropriate response to conflict. Characteristic of this period in the study of law by anthropologists is the use of actor-oriented approaches. Later on, interpretative perspectives that paid closer attention to the discursive and power dimensions of law were used. They focused on the use actors made of the available laws and legal institutions in plural legal situations.
Since economic, social, and symbolic power relations are differentially inscribed into law, choices from among legal systems have real consequences. Moreover, not just the disputing parties resort to forum shopping, but often also the institutions dealing with the dispute, depending on how they stand to gain greater authority (K. von Benda-Beckmann 1981). In the ensuing years, the inventory of theoretical tools to study legal pluralism, also in its state-embeddedness, was expanded and its analysis further refined. In conflict processing, how problems are framed is largely determined by idiom shopping and code switching, that is, choosing from among the available legal idioms or discourses. Those concepts underscored the significance of the findings of interpretive anthropology, which understood legal registers as ways of seeing the world. Moreover, gender studies and feminist anthropology focused, in particular, on the socio-legal production of inequalities. The latter strand inspired anthropologists to study gender issues within the framework of family relations, for instance, in property regimes, or for ascertaining terms of access to natural resources under conditions of legal pluralism. Based on this literature, F. von Benda-Beckmann (2000, 153) demonstrated that the public-private distinctions, fundamental to western political theories and to legal doctrines of statehood, failed to appreciate that gendered attributes might be differently ascribed in other legal orders. The intellectual environment within which legal pluralism emerged as a concept and began to develop further took inspiration from the sociological theory of Anthony Giddens on structuration and his constructivist perspective, from Bourdieu's contributions to praxeological theories, and from Foucault's notions of power and governmentality. Since the 1990s, governance, significantly, also alludes to normative activities such as law production by a wide range of public and private organizations, which include (I)NGOs and corporations. Established as a less controversial concept, but offering a complementary tool to legal pluralism, it eventually also proved to be conducive to the study of interlinkages between legal pluralism and the state. Governance and legal pluralism have thus stood in a "symbiotic" relationship (Zips and Weilenmann 2011). Critique of "customary law" Increased emphasis in anthropological research on disputing in colonial settings led to a reconsideration of the character of customary law. Critical analysis showed that customary law, too, was pluralized and transformed over time by colonial state law and reinvented as neo-tradition. This critique revealed how deeply interwoven dispute analysis was with state normativity, be it the state as the leviathan against which informal conflict processing takes shape, or the involvement of state officials in conflict processing outside the framework of state institutions, often also wearing the hat of local informal grassroots legal agents. Customary law allowed chiefs endowed with colonial authority, for instance, to enhance their power and stewardship over land at the expense of women's rights to land (Stewart 2003, 48; Hellum et al. 2007), a creative, still ongoing practice that entails combining state law with custom. 6 This explains the politics of law, and the entanglement of laws in plural legal orders, but it is not necessarily the full story.
Critics focused too much on state institutions and political rhetoric, and failed to reflect on the broader range of contexts in which law is used and where other versions and interpretations might apply. To very different degrees, all colonial legal orders incorporated customary laws, thereby assigning to them specific, often truncated interpretations that co-existed and interacted with interpretations developed in other contexts. But most local laws are flexible, context-dependent, and constantly changing in response to state demands, such as economic development, democracy, or human rights, as well as to general social and economic developments. When shifting the gaze to actors that bring about the change, customary law appears not to be entirely made by the state and often is not even applied in state institutions. 7 The relationship between local law and state law alternates between rapprochement and distancing. Local communities may even actively choose to adopt state law, erasing almost all traces of legal pluralism. But local groups may also capture elements of state law and single these out from later renderings of state law. In the process, these elements become vernacularized and remain valid in this vernacularized form as local law. Legal pluralism studies established that the agency of persons connecting various legal orders was manifold or plural-legal. Zenker and Hoehne (2018), for instance, call attention to the paradox that, in their attempts to implement state law, state officials in Africa are obliged to deal with the logic of customs that is often incompatible with that of state law, in order to be able to do their work of the state. The translation work of street-level bureaucrats creates interpretations of law that are not always compatible with official interpretations. This represents another layer of legal pluralism. 8 The concept of legal pluralism Deeper insights into disputing processes showed for one, that people in colonial settings often had a choice to opt for one legal system over another, and secondly, that the state was not a passive onlooker but an active agent in the construction of multiple legal orders. Thus debate about legal anthropological concepts and categories to identify and spell out links between normative orders was spurred by a heightened interest in the state. 9 Legal anthropologists now needed an analytical framework to accommodate a conceptual inclusion of the state, its judiciary and legal institutions into their analyses of legal situations at local level. The legal sociologist Gurvitch (1935) first used legal pluralism to denote co-existing legal orders. But it was the Belgian lawyer Vanderlinden (1971) who first used the term in an analytical sense. 10 Legal pluralism, according to him, referred to a situation in which people could choose from among more than one co-existing set of rules. Legal plurality, by contrast, denoted the co-existence of multiple (sub-)legal systems within one state, to cater to different categories of persons who had no option to choose from among these bodies of law. For example, if commercial law was applicable for merchants, civil law was applicable for other citizens. The term legal pluralism initially met with considerable resistance and there were opposing views about what the term law signified. Over the years, many alternative terms were coined to deal with this discomfort (K. von Benda-Beckmann and Turner forthcoming). 
The nineteenth-century modernist notion of the nation state as the sole source of law dominated, whereby only state law, and not other normative orders, deserved to be labeled as law, as the codified, differentiated, institutionalized, and legitimized expression of state sovereignty and the monopoly of power. This understanding of law continued to be widely accepted by lawyers, economists, and social and political scientists throughout the twentieth century. They held that law would otherwise lose its distinctive meaning. Moore's publication on the semi-autonomous social field is often erroneously cited in favor of the term legal pluralism. In fact, she reserved the term law for state law (Moore 1973). Roberts (1979b, 1998) and Tamanaha (1993) also shared this opinion, but Tamanaha (2007) made an about-face later on and now considers any normative order to be law if the participants call it law. In this context, it is revealing that concepts that presupposed the existence of legal pluralism were warmly welcomed in academia and did not evoke comparable polemics, even if they conveyed the same message about the plurality of law. For the related concept of governance, for instance, it was generally accepted that state institutions are not the only institutions that produce law. Rather, there is pluralism in lawmaking beyond the purview of lawmakers, who increasingly experience the power of organized non-state governance intervening in the state's legislative processes. Governance is key to the study of the relationship between the state and non-state lawmaking institutions, as this link sets the stage for broadly acknowledging the existence of something called global legal pluralism. John Griffiths' seminal 1986 article in this journal, which details his analysis of the trajectories of legal pluralism, invited broad criticism. Unfortunately, the article was incorrectly interpreted as a value judgment that positioned legal pluralism against the state. Griffiths' polemic was meant to show lawyers that a state-centric view of law eclipsed the significance of other kinds of law being used in social interactions; it did not entail a value judgment. Empirical data on plural legal circumstances provide neither a positive nor a negative content assessment of the respective legal regimes (see, e.g. Sharafi 2008; Zips and Weilenmann 2011). The critics of legal pluralism argued that this article represented the eternal bible of all adherents of a moral anti-state tenet, overlooking the considerable diversity among scholars studying legal pluralism. The objective was rather to address the incompatibility between state legal dogmatism and empirical challenges, for the ideology does not need an empirical foundation. In this period, two main conceptions of legal pluralism proliferated, especially in debates on the relationship between legal pluralism and the state. Even if scholars who considered law as standing for state law did not necessarily reject the notion of legal pluralism, they accommodated legal plurality only if and to the extent that the state legal system recognized other forms of law. Legal pluralism also concerns the various degrees of plurality. Concepts of legal pluralism that put the state and its domestic law center stage in plural legal configurations are labeled as "state," "weak," "juristic," "classic" (e.g. A. Griffiths 1986; see also, e.g. J. Griffiths 2002; Sezgin 2004; F. von Benda-Beckmann 1997), "relative" (Vanderlinden 1989), "lawyer's" (e.g.
Benda-Beckmann 2002, 25), or "legally constructed" (e.g. K. von Benda-Beckmann 2001a, 24). They vary in how much power and sovereignty are ascribed to the state and to what extent interactions can be allocated to the various normative registers within such a configuration. These positions assert that law must be recognized by the state, that the scope of "existence" of other-than-state law is defined by the state legal system, and that the incorporation of plural legal components into the state system can be achieved. Or, more concretely, legal pluralism is understood here as deriving from the recognition of one legal system by another legal system, usually that of the nation state. Keebet von Benda-Beckmann (2001b; see also Anders 2004) calls this a legal political concept of legal pluralism that has developed into what scholars interested in law at the transnational and global level today understand as "normative legal pluralism." However, other scholars considered a state-centric position inconsistent, because most proponents of this view acknowledged the existence of religious law as law, despite the fact that it was not enacted by the state. It was therefore not appropriate for the social scientific study of law that aimed at understanding the social working of law (F. von Benda-Beckmann 1979). The second strand places the formal legal system principally on a more or less equal footing with all or some of the other legal orders constituting a plural legal constellation. Here the relationship is qualified, for instance, as "deep," "strong," "real" (J. Griffiths 1986) or as "factual" legal pluralism (Angelo 1996, 1). 11 An implicit or explicit agency of diverse legal regimes (customary, religious) is often assumed. The term "co-existence" then translates into an arrangement of normative orders, each with its own legitimacy and validity. Here the existence of law irrespective of what the state declares to be law is emphasized. People may refer to a normative register even if it is not recognized by the state. A focus on the spatial and scalar arrangement of legal pluralism reveals that components from "above" and "below" state law may take effect inside a state territory and interact with state law in various ways. They generate overlapping spaces of authority, and invite dogmatic analyses of legal content and interpretation of concrete norms within the legal system. Legal actors "on the ground" quite often reach different conclusions about these forms of territorial inclusion of domestic law in plural legal arrangements from those of state institutions. No coherent system can be formed as a result; instead, only a patchwork of normative components is translated into the legal landscape (Anders 2004). As Santos (2002, 95) has put it, this gives rise to a condition of "internal legal pluralism" where it is possible for "different logics of regulation carried out by different state institutions with very little communication between them" to co-exist. Those who advocate a broad understanding of law have emphasized that the term "law" references a number of categorically different domains. It may signify a science, an ideology, a technology or a craft, or a cognitive concept as a way of imagining the real (Geertz 1983). Moore (1978b) pointed to the great variety of social processes in which law is involved. Interpretation, confirmation, validation, and reproduction occur especially in formalized situations (e.g.
tribunals, notarial and administrative decisions), in the media, education and academia, at the workplace, or in informal communication; social practices generate standardization of action, as do other forms of routinization of social practices. As these routinized and standardized practices may eventually translate into normativity, law must be viewed as one of the basic domains of practice of human existence. As a social institution, it is comparable to religion and to political or economic practice. This requires reflecting on law at a higher level of abstraction than dogmatic law with its exclusive focus on the state allows for. It requires abstracting comparative and analytical concepts from the specific, often western, manifestations from which they derive, an operation that resembles how kinship and religion have become analytical terms. Much of the confusion and controversy in the debate results from a lack of refinement and specificity regarding the dimensions in which legal orders differ from each other. Legal orders differ on several dimensions: degree of regulation, institutionalization, differentiation, systematization, modes of sanctioning, spatial and social scope of validity, and basis of legitimacy. In addition, they can also differ in terms of the number of concrete norms and principles; they can be codified, written, or oral. 12 Comparisons along these diverse lines have shown that there are important commonalities between state law on the one hand and customary law and other normative orders on the other, even if other differences are significant. They have also disproved the claim that state law is by definition more important. Legal pluralism in the anthropological sense therefore is a sensitizing concept for situations in which people draw upon several legal systems, irrespective of their status within the state legal system. It endorses anthropological findings indicating that, in their social and economic interactions, people resort to customary law, religious law, or an unnamed new law, often mixed with parts of state law, even when the state explicitly denies the validity of these other kinds of law. Moreover, normatively defined legal pluralisms abound (Benda-Beckmann and Benda-Beckmann 2006, 26). All legal systems embody ways of dealing with other legal systems. In Islamic law, for instance, elaborate regulations have been put in place to recognize customary law. According to this view, state recognition of other legal orders, or the lack of it, is a significant indication of what the normative relationship will be. But that does not fully capture the range of laws that people actually employ in social interaction. For this, a broad empirical and comparative concept is necessary that calls attention to the possibility that more than one legal system could be relevant for social interaction, without claiming that this is necessarily the case always and everywhere. This view of legal pluralism has been extremely useful for understanding that constellations of legal pluralism differ widely in scope and that the relative importance of their components varies. It has served to study modes of governance and the ways in which power relations are inscribed into law, and to understand how law regulates access to resources and justice, and the lack of it.
Situating the state and its law, this perspective lays out the specific contexts and ways in which normative orders are invoked, interpreted, and put into practice, and shows the dynamics of how law is both maintained and modified along the chains of interaction, settings, and contexts. Alternative concepts, such as polycentrism, legalities and interlegality, parallel legal orders, nomosphere, hybridity, vernacularization, iterations, lawfare, and legal diversity, call attention to specific aspects of legal pluralism. Global dynamics The end of the twentieth century saw a rise in globalizing economies. This was characterized by mass migration, innovations in transportation and communication technologies, the growing significance of international finance, and the proliferation of secular and religious transnational organizations. These dynamics affected various social, political, and economic fields and evoked critical tensions in legal environments between homogenization and hyperspecialization. Global supply chains, natural resource management, and development cooperation, for instance, were increasingly dominated by the logics of neoliberal normativity. Global governance institutions increasingly claimed legislative powers on a global scale, adding new dimensions to legal choice-making. These developments have generated a wide array of theoretical work on the role and character of legal pluralism. The most salient issues concerning the relationship of the state and legal pluralism shall be discussed below. The burgeoning of new forms of governance has compelled social scientists to theorize the role and character of the expanding field of transnational organizations, and thus also the scope of epistemic communities that are in a crucial position to influence the framing of issues to be regulated. Their norm-setting activities often materialize not so much in explicit rulemaking as in contracts and in standardizing procedures that define what constitutes admissible evidence. Unequal relations among epistemic communities, such as among lawyers, technical experts, and economists, have given rise to legal complexity. Legal pluralism studies have addressed the conundrum of unequal power relations for actors defending their rights in fields dominated by epistemic communities (e.g. Wiber 2005). This required, on the one hand, reconsidering the concept of law in light of the fact that the nation-state had ceased to be the main source of law; the position of states and the concept of governance had to be reconsidered (Reyntjens 2015). Theories of relationality, such as actor-network and assemblage theory, have produced new perspectives in the social sciences and in socio-legal studies on actors involved in lawmaking. With a global legal environment replete with lawmaking bodies, the sovereign state is no longer the sole legislative forum. State law is viewed no less as a bottleneck for global flows of transnational law in attempts to link above-the-state and below-the-state legal pluralisms (Helfand 2015, 5). In other words, where lawmaking processes are pluralized, to a great extent, nation states no longer remain pivotal to law, nor do they represent the sole legitimate source of lawmaking in every single social and economic field. These conditions have finally convinced legal scholars and social scientists to adopt the concept of legal pluralism without relinquishing the significance of the state (see, e.g. Berman 2014, 2016; Croce and Goldoni 2015; Michaels 2009, 2013; Twining 2009).
The proliferation of "particularized normative orders" (Darian-Smith 2013a, 37) also sparked an interest in the time, space, and scalar dimensions of legal pluralism. Postfoundational and critical social theories and the theoretical work on transboundary communities have stimulated research on laws conveyed to such new scalar arrangements, and the changes these undergo in the new socio-economic and legal contexts. Similarly, such research is confronted with the challenge of viewing the state from a global perspective, as also embodying a diversity of people, of religious expressions, and of citizens and migrants desirous of or requiring a high degree of mobility. Religious law and doctrine crossing national boundaries played a pioneering role in the formulation of an emerging concept of global legal pluralism. More than the study of custom, it is the increasingly contested sovereignties of domestic and religious law that have moved legal pluralism closer to the state in liberal democracies (Turner and Kirsch 2009). While many predicted the end of the nation state, migration studies have shown that nation states were far from fading in significance, and in crucial ways affected the life of migrants (Darian-Smith 2013a, 37). On a global scale, such dynamics translated into new theoretical deliberations. Globalizing processes required looking afresh at asymmetrical power relationships entailed in law (Croce and Goldoni 2015). Postcolonial and subaltern theories, as well as critical approaches to legal orientalism, focused on the enduring power differentials after decolonization and helped to deconstruct the underlying conviction of the supremacy of Western law (Darian-Smith 2013b; Baxi 2000). Not surprisingly, the politics of global legal pluralism proves to be deeply involved in neoliberal projects. They include decentralization, for instance, when powerful actors manipulate legal registers of vulnerable groups to weaken the validity of claimants' rights. Other projects concern democratization, for instance, when the rights of non-majoritarians are acknowledged. Yet others deal with free trade, where goods and money can move freely but not people. "Lawfare," once an instrument of colonial oppression, is now the "weapon of the weak" to claim resources, recognition, and voice (Comaroff and Comaroff 2009, 37). Claims to recognition of grassroots law also found expression in constitutional legal pluralism, especially in Latin American countries (Hoekema 2017). Paradoxically, one result of such radical legal thinking may be that indigenous cosmovisional law is applied in cases where it is possibly the judge, and not the protagonists, who considers indigenous legal reflections the most appropriate to adopt. The expansion of development cooperation led to a flourishing legal development industry featuring a host of "law merchants." Transnational legal templates are traded around the globe to promote the rule of law and to assist constitution making in emerging or vulnerable states to ensure compliance with the requirements of transnational extraction schemes (Grenfell 2013; Seidel 2017). Such projects often involve a neo-codification of local forms of normativity to fit constitutional requirements, similar to what was practiced during colonial times. The problem of legal complexity is exacerbated by the fact that development agencies often propose new laws on the erroneous assumption that they fill a legal void.
They also have to navigate between the legal environments of the donor and the recipient states. The resulting "project law" compounds the plural legal order that forms the environment for the recipients of development cooperation. Legal models travel around the world at an unprecedented pace through a great variety of channels, pluralizing constellations of legal pluralism at the intersection of various scales (Behrends et al. 2014). This entails important translation processes. Some transnational legal models need to be downscaled to the level of the nation state and below. Human rights, for example, will only be understood and taken up in practice if they are successfully translated into local discourses and politics (Corradi et al. 2017). Moreover, global standard-setting tools, such as the UN Declaration on Human Rights, may gradually change their meaning and significance and assume a new shape within global assemblages of which this hybrid law is but one, internally fragmented, component. It is not just the vernacularization of human rights on the ground that adds to legal pluralism; it is more its neoliberal exegesis that takes effect in global legal pluralism (Goodale 2009). Such neo-standardization endows global law with new meaning but also re-pluralizes it at a national scale. Strong tendencies toward a re-nationalization of the nomosphere, in domains such as the legal regulation of migration and of global trade, render global legal pluralism even more complex. Other legal models must undergo upscaling, for example, when indigenous rights acquire a generic meaning within a national legal order or across boundaries in international law. Studies of such translating processes have called attention to the role of the intermediating actors and their relations with the recipients and addressees of the new laws. As Turner (2015) has argued, in order to capture the interconnections of plural legal constellations and the differences in the scope of the components, multisited and multi-scalar studies appear to be better suited than methodological nationalism, which presumes the nation state to be the sole "natural" socio-political and legal unit of reference (Wimmer and Schiller 2003; Sassen 2010). While the social sciences developed an empirically grounded concept of global legal pluralism, legal studies suggested a postulated normative concept of legal pluralism (Berman 2014). According to Berman (2014, 2016), such normative pluralism may pursue either a substantive strategy accommodating diversity, or a proceduralist one that seeks to manage pluralism under conditions of both the fragmented landscape of legal sovereignties and the project of global legal harmonization. There, non-state law acknowledges an emerging world legal order that, while indicative of the growing significance of international law, does not imply the diminishing importance of domestic law. By invoking the term non-state law, these scholars put the interaction and degrees of the mutually constituting dynamics between the spheres of state and non-state center stage in research and analysis (Hertogh 2008; Dedek and Praagh 2015; Helfand 2015; Kötter et al. 2015). 13 Scholars of Science and Technology Studies have shown that we are only beginning to understand the ontological and epistemological challenges posed to the actors involved (Cadena and Lien 2015). Methodologically, the research focus within legal anthropology has already shifted the units of analysis from codes to (extended) cases and events, situations, and contexts.
Postfoundational epistemological interest turned its gaze to legal practice and situated knowledge (Davies 2017). In this line of thought, the unit of research eventually came to be understood as a complex web of relations including human and non-human, discursive and other constitutive elements such as knowledge regimes (e.g. McGee 2014; Pieraccini 2016; Robinson and Graham 2018). Conclusions We have shown that anthropological analyses of plural legal orders have alternated between moving towards state law and away from it. This occurred in engagement with successive social and legal theories and as a result of perceived socio-economic and political changes. Evolution theory constructed a unilineal development from "primitive" to modern state law. Structural functionalist theories were so preoccupied with finding the internal working of the laws they found in the colonies that the state receded to the background. These studies were based on the theory that law created order. Anglo-American legal doctrine in the mid-twentieth century, which focused on case law, the American school of legal realism, and a shift in anthropology to the study of conflict, narrowed down the study of law to disputes. In European legal anthropology, the social working of law in the interaction of different legal systems was at the core of research. Legal pluralism was developed as an analytic tool for that purpose. The term was criticized on the basis of a modernist view of law. From a post-colonial perspective, the concept of customary law was also criticized for being an invention of the colonial state, though customary law was not purely a state invention. In the late twentieth century, actor-oriented theories, social constructivism, interpretative theories, relational approaches, and network theories diverted attention to the contexts in which law was deployed in social interaction. This again brought the state into sharper relief. Globalization theories at the turn of the twenty-first century have spawned interest not only in transnational networks and production chains but also in legal transfers and the translation processes at different scale. Skepticism towards the concept of legal pluralism vanished in general. Multi-scalar and multisited studies repositioned the state as one of the sources of law, amongst many, at both the transnational and sub-national levels. Analyzing the co-transformative processes to which the concept of the state itself was subjected in these developments would be beyond the purview of this article. Suffice it to say that the concept of the state today is a far cry from the twentieth century ideal of a sovereign nation state. An epistemological insight provided through the use of the analytical concept of legal pluralism is that any sort of plural legal configuration eventually engages or is entangled with statehood, whether it is about co-opting, bypassing, neglecting, accommodating, or merging. Eventually, the acceptance of global legal pluralism has resuscitated the significance of the state in its present fragmented and dependent guise in complex plural legal assemblages. Notes 1. We discuss the literature selectively and cannot do justice to each individual author's view on the relationship between state law and legal pluralism in relation to the social
Three sides of the geometric Langlands correspondence for gl_N Gaudin model and Bethe vector averaging maps We consider the gl_N Gaudin model of a tensor power of the standard vector representation. The geometric Langlands correspondence in the Gaudin model relates the Bethe algebra of the commuting Gaudin Hamiltonians and the algebra of functions on a suitable space of N-th order differential operators. In this paper we introduce a third side of the correspondence: the algebra of functions on the critical set of a master function. We construct isomorphisms of the third algebra and the first two. New objects are the Bethe vector averaging maps. Introduction We consider the gl N Gaudin model associated with a tensor power of the standard vector representation. The geometric Langlands correspondence identifies the Bethe algebra of the commuting Gaudin Hamiltonians and the algebra of functions on a suitable space of N-th order differential operators. In this paper we introduce a third ingredient of the correspondence: the algebra of functions on the critical set of a master function. We construct isomorphisms of the three algebras. Master functions were introduced in [SV] to construct hypergeometric integral solutions of the KZ equations, κ ∂I/∂z_i = H_i(z) I(z), i = 1, . . . , n, where H_i(z) are the Gaudin Hamiltonians. The hypergeometric solutions are integrals built from a scalar master function Φ(z, t) and a universal weight function ω(z, t), which is a vector-valued function. It was realized almost immediately [Ba, RV] that the value of the universal weight function at a critical point of the master function is an eigenvector of the Gaudin Hamiltonians. This construction of the eigenvectors is called the Bethe ansatz. The critical point equations for the master function are called the Bethe ansatz equations and the eigenvectors are called the Bethe vectors. The Bethe ansatz gives a relation between the critical points of the master function and the algebra generated by the Gaudin Hamiltonians. The algebra of all (in particular, generalized) Gaudin Hamiltonians is called the Bethe algebra. Higher Gaudin Hamiltonians were introduced using different approaches in [FFR] and [T], see also [MTV1]. In [ScV, MV1], an N-th order differential operator was assigned to every critical point of the master function. The differential operators appearing in that construction form the second component of the geometric Langlands correspondence. The third component of the geometric Langlands correspondence is the algebra of functions on the critical set of the master function. In this paper we show that all three components of the geometric Langlands correspondence are on equal footing; they are isomorphic. The main results of the paper are Corollaries 5.4 and 8.6. The paper is organized as follows. In Section 2 we recall the definition of the Bethe algebra B V of a tensor power of the vector representation of gl N [MTV2]. In Section 3 we introduce the algebra O W of functions on a suitable Schubert cell W. Points of W are some N-dimensional spaces of polynomials in one variable. Such a space X is characterized by a monic N-th order differential operator with kernel X. The algebra O W can be considered as the algebra of functions on the space of those differential operators. In Section 4 we recall an isomorphism ζ : O W → B V constructed in [MTV2]. In Section 5 a master function and its quotient critical set C are introduced and an isomorphism ι * : O W → O C is constructed. Here O C is the algebra of functions on C.
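As a concrete illustration of the critical point equations mentioned above, the sketch below solves the Bethe ansatz equations numerically in the simplest non-trivial setting: an sl_2 specialization with each of the n sites carrying the vector (spin-1/2) representation and a single Bethe root. The conventions (weights and exponents) are the standard ones of the Gaudin Bethe ansatz literature and are assumptions made for the example; they are not meant to reproduce the exact normalization of the master function used later in this paper.

```python
import numpy as np
from scipy.optimize import fsolve

# Illustrative sl_2 specialization: n sites with the vector (spin-1/2)
# representation at positions z_a and l Bethe roots t_j.  The critical point
# (Bethe ansatz) equations then read
#     sum_a 1/(t_j - z_a) - sum_{k != j} 2/(t_j - t_k) = 0 ,   j = 1, ..., l.
z = np.array([0.0, 1.0, 3.0])   # assumed evaluation points (z_1, ..., z_n)
l = 1                           # number of Bethe roots (illustrative)

def bethe_equations(t):
    res = []
    for j in range(l):
        val = np.sum(1.0 / (t[j] - z))
        val -= sum(2.0 / (t[j] - t[k]) for k in range(l) if k != j)
        res.append(val)
    return res

t_initial = np.array([0.5])                 # starting guess between z_1 and z_2
t_critical = fsolve(bethe_equations, t_initial)
print("Bethe root:", t_critical, "residual:", bethe_equations(t_critical))
```

For generic z the solutions are isolated and nondegenerate, in line with the counting of critical points recalled in Section 5, and the value of the universal weight function at such a root is the corresponding Bethe vector.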
Consequently, we obtain a composition isomorphism. In Section 6 we introduce the universal weight function ω(z, t) and describe the basic facts of the Bethe ansatz. In Section 7 the Bethe vector averaging maps
$$ v_F :\; z \,\longmapsto\, \frac{1}{l_1! \cdots l_{N-1}!} \sum_{(z,p) \in C_z} \frac{F(z,p)\,\omega(z,p)}{\operatorname{Hess}_t \log \Phi(z,p)} $$
are introduced. Here Φ(z, t) is the master function, C z the critical set of the function Φ(z, · ), ω(z, t) the Bethe vector, and F (z, t) an auxiliary polynomial function. Theorem 7.1 says that the Bethe vector averaging maps are polynomial maps. This is the main technical result of the paper. Using the Bethe vector averaging maps, we construct in Section 8 a new (direct) isomorphism ν : O C → B V . We prove that the composition of all these maps is the identity map. Section 9 contains the proof of Theorem 7.1. The paper discusses one example: the Gaudin model on a tensor power of the vector representation of gl N . But the picture presented here presumably holds for more general representations and more general Lie algebras. All the ingredients of our considerations (the Bethe algebras, master functions, Bethe vector averaging maps) are available in other situations. The authors thank A. Gabrielov for helpful discussions. 2. Bethe algebra B λ 2.1. Lie algebra gl N . Let e ij , i, j = 1, . . . , N, be the standard generators of the Lie algebra gl N satisfying the relations [e ij , e sk ] = δ js e ik − δ ik e sj . Let h ⊂ gl N be the Cartan subalgebra generated by e ii , i = 1, . . . , N. Let M be a gl N -module. A vector v ∈ M has weight λ = (λ 1 , . . . , λ N ) ∈ C N if e ii v = λ i v for i = 1, . . . , N. A vector v is singular if e ij v = 0 for 1 ≤ i < j ≤ N. Denote by (M) λ the subspace of M of weight λ, by (M) sing the subspace of all singular vectors in M, and by (M) sing λ the subspace of all singular vectors of weight λ. Denote by L λ the irreducible finite-dimensional gl N -module with highest weight λ. The gl N -module L (1,0,...,0) is the standard N-dimensional vector representation of gl N , denoted below by V . We choose a highest weight vector of V and denote it by v + . The Shapovalov form on V is the unique symmetric bilinear form S defined by the conditions S(v + , v + ) = 1, S(e ij u, v) = S(u, e ji v), for all u, v ∈ V and 1 ≤ i, j ≤ N. For a natural number n, the tensor Shapovalov form on V ⊗n is the tensor product of the Shapovalov forms of the factors. 2.2. Lie algebra gl N [t]. Let gl N [t] = gl N ⊗ C[t] be the complex Lie algebra of gl N -valued polynomials with the pointwise commutator. We identify gl N with the subalgebra gl N ⊗ 1 of constant polynomials in gl N [t]. Hence, any gl N [t]-module has a canonical structure of a gl N -module. We have the evaluation homomorphism, ev : gl N [t] → gl N , ev : g(u) → g u −1 . Its restriction to the subalgebra gl N ⊂ gl N [t] is the identity map. For any gl N -module M, we denote by the same letter the gl N [t]-module obtained by pulling M back through the evaluation homomorphism. There is a Z ≥0 -grading on gl N [t]: for any g ∈ gl N , we have deg (g ⊗ t r ) = r. 2.3. The gl N [t]-module V S . Let n be a positive integer. Let V be the space of polynomials in z 1 , . . . , z n with coefficients in V ⊗n , V = V ⊗n ⊗ C C[z 1 , . . . , z n ]. For v ∈ V ⊗n and p(z 1 , . . . , z n ) ∈ C[z 1 , . . . , z n ], we write p(z 1 , . . . , z n ) v instead of v ⊗ p(z 1 , . . . , z n ). The symmetric group S n acts on V by permutations of the factors of V ⊗n and the variables z 1 , . . . , z n simultaneously. Denote by V S the subspace of S n -invariants of V. The space V S is a free C[z 1 , . . .
, z n ] S -module of rank N n , see [CP], cf. [MTV2]. The space V is a gl N [t]-module with a series g(u), g ∈ gl N , acting by The gl N [t]-action on V commutes with the S n -action. Hence, V S ⊂ V is a gl N [t]-submodule. Define a grading on C[z 1 , . . . , z n ] by setting deg z i = 1 for all i. Define a grading on V by setting deg(v ⊗ p) = deg p for any v ∈ V ⊗n and p ∈ C[z 1 , . . . , z n ]. The grading on V induces a grading on V S and End (V S ). The gl N [t]-action on V S is graded, [CP]. 2.4. Bethe algebra. Given an N × N matrix A = (a ij ), we define its row determinant to be Let ∂ be the operator of differentiation in the variable u. Define the universal differential operator D by the formula It is a differential operator in u, whose coefficients are formal power series in u −1 with coefficients in U(gl N [t]), and B ij ∈ U(gl N [t]), i = 1, . . . , N, j ∈ Z i . The unital subalgebra of U(gl N [t]) generated by B ij , i = 1, . . . , N, j ∈ Z 0 , is called the Bethe algebra and denoted by B. By [T], cf. [MTV1], the algebra B is commutative, and B commutes with the subalgebra is a scalar series. The scalar differential operator will be called the differential operator associated with an eigenvector v. [MTV2]. Denote by B V the Bethe algebra of (V S ) sing λ . The Bethe algebra B V is our first main object. Algebra of functions Given a partition λ = (λ 1 , . . . , λ N ) with λ 1 d − N, introduce a sequence Denote by W the subset of Gr(N, d) consisting of all N-dimensional subspaces X ⊂ C d [u] such that for every i = 1, . . . , N, the subspace X contains a polynomial of degree d i . In other words, W consists of subspaces X ⊂ C d [u] with a basis f 1 (u), . . . , f N (u) of the form For a given X ∈ W, such a basis is unique. The basis f 1 (u), . . . , f N (u) will be called the flag basis of X. The set W is a (Schubert) cell isomorphic to an affine space of dimension |λ| with coordinate functions f ij . Let O W be the algebra of regular functions on W , We may regard the polynomials f i (u), i = 1, . . . , N, as generating functions for the generators see [MTV2]. 3.2. New generators of O W . For g 1 , . . . , g N ∈ C[u], introduce the Wronskian where an i-th row is formed by derivatives of g i . Let f i (u), i = 1, . . . , N, be the generating functions in (3.1). We have where n = |λ| and A 1 , . . . , A n are elements of O W . Define We have and B W ij ∈ O W , i = 1, . . . , N, j ∈ Z i . For any (i, j), the element B W ij is homogeneous of degree j − i. For any i the series B W i (u) is homogeneous of degree −i. The elements B W ij ∈ O W , i = 1, . . . , N, j ∈ Z i , generate the algebra O W , see [MTV2]. 3.2.1. For X ∈ W, denote by D X the monic scalar differential operator of order N with kernel X. We call D X the differential operator associated with X. The operator D X is obtained from D W by specialization of variables f ij to their values at X. 3.3. Wronski map. Let X ∈ W. The Wronskian determinant of a basis of the subspace X does not depend on the choice of the basis up to multiplication by a number. The monic polynomial representing the Wronskian determinant of a basis of X is called the Wronskian of X and denoted by Wr X (u). The Wronski map W → C n sends a point X ∈ W to a point a = (a 1 , . . . , a n ), if Wr X (u) = u n + n s=1 (−1) s a s u n−s . The Wronski map has finite degree. The degrees of elements of (V S ) sing λ are not less than N i=1 (i − 1)λ i and the homogeneous component of (V S ) sing Theorem 4.2 ( [MTV2]). 
The map The maps ζ and η intertwine the action of the multiplication operators on O W and the action of the Bethe algebra B V on (V S ) sing λ , that is, for any F, G ∈ O W , we have (4.1) η(F G) = ζ(F ) η(G) . 5. Critical points of the master function is called a master function. The master functions arise in the hypergeometric solutions of the KZ equations, see [M,SV,V1] and in the Bethe ansatz method for the Gaudin model, see [Ba, RV]. The product of symmetric groups S l = S l 0 × · · · × S l N−1 acts on the coordinates T by permutations of the coordinates with the same upper index. The master function is S l -invariant. We consider the master function as a function of t depending on the parameters t (0) . That is, a point T is a critical point if the following system of l − n equations is satisfied: here a = 1, . . . , N − 1, j = 1, . . . , l a . In this definition we assume that all the denominators in (5.2) are nonzero. In the Gaudin model, equations (5.2) are called the Bethe ansatz equations. For a point T ∈ C l , denote where we take the determinant of the (l − n) × (l − n) matrix of second derivatives of the function log Φ with respect to all of the variables t (a) i with a > 0. Theorem 5.1 ( [ScV,MV2]). For generic t 0 ∈ C n , all critical points of the function log Φ( t 0 , ·) are nondegenerate. The number of the S l 1 × · · · × S l N−1 -orbits of critical points equals dim (V ⊗n ) sing λ . Denote by C ⊂ C l T the union of all critical points of the functions log Φ( t 0 , ·) for all t 0 ∈ C n with distinct coordinates t The set C will be called the quotient critical set of the master function. Let O C be the algebra of regular functions on C, that is, the restriction of C 2) are homogeneous. Hence, C is a quasi-homogeneous algebraic set and the algebra O C has a grading with deg (σ . . , f N,X (u) be the flag basis of X. Introduce the polynomials y 0,X (u) , y 1,X (u) , . . . , y N −1,X (u) by the formula For each a, the polynomial y a,X (u) is a monic polynomial of degree l a , y a,X (u) = u la + la,X the roots of y a,X (u). Then σ will be called the root coordinates of X. For every a the numbers t la,X are determined up to a permutation. Let Σ X be the image of T X in C l Σ . A point X ∈ W will be called nice if all roots of the polynomials y 0,X (u) , y 1,X (u) , . . . , y N −1,X (u) are simple and for each a = 1, . . . , N − 1, the polynomials y a−1,X (u) and y a,X (u) do not have common roots. Nice points form a Zariski open subset of W, see [MTV3]. If X is nice, then the root coordinates T X satisfy the critical point equations (5.2), see [MV1]. Proof. The lemma follows from the fact that the nice points of W are mapped to C. Differential operator D T and a map , then the kernel X T of D T consists of polynomials; moreover, X T is a point of W, see [MV1]. The correspondence T → X T defines a rational map ι : C → W. 5.5. Quotient critical set is a nonsingular subvariety. Proof. The map θ, considered as a map from W to θ(W) is finite. The set θ(W) is Zariski closed since W is Zariski closed. We know from [MV1] that θ(W) contains the subset C ⊂ C, the image of nondegenerate critical points. We have θ(W) = C, since C is the Zariski closure of C and θ(W) is Zariski closed. The fact that ιθ = id W at generic points of W is proved in [MV1]. Therefore, ιθ = id W for all points of W. Consider the algebra homomorphism ι * : is an embedding and the map ι : C → W is an isomorphism. 
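For the reader's convenience we record explicitly, in the notation of the surrounding text, the critical point equations, the Hessian of the master function, and the Bethe vector averaging map of Section 7; the normalization of the master function Φ itself is not fixed here, only what the prose above determines.

\[
\frac{\partial \log \Phi}{\partial t^{(a)}_j}(z,t) \;=\; 0, \qquad a = 1,\dots,N-1,\quad j = 1,\dots,l_a ,
\]
\[
\operatorname{Hess}_t \log \Phi(z,t) \;=\; \det\Bigl(\frac{\partial^2 \log \Phi}{\partial t^{(a)}_i \,\partial t^{(b)}_j}(z,t)\Bigr)_{a,b \geq 1},
\]
\[
v_F \colon z \;\longmapsto\; \frac{1}{l_1!\cdots l_{N-1}!}\,\sum_{(z,p)\in C_z}\frac{F(z,p)\,\omega(z,p)}{\operatorname{Hess}_t \log \Phi(z,p)} .
\]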
Universal weight function and Bethe vectors We remind a construction of a rational map ω : C l → (V ⊗n ) λ , called the universal weight function, see [M, SV], cf. [RSV]. A basis of V ⊗n is formed by the vectors e J v = e j 1 ,1 v + ⊗· · ·⊗e jn,1 v + , where J = (j 1 , . . . , j n ) and 1 j a N for a = 1, . . . , N. A basis of (V ⊗n ) λ is formed by the vectors e J v such that #{a | j a > i} = l i for every i = 1, . . . , N − 1. Such a multi-index J will be called admissible. The universal weight function has the form ω(T ) = J ω J (T )e J v where the sum is over the set of all admissible J, and the functions ω J (T ) are defined below. Define z,p) and D (z,p) are the differential operators associated with the eigenvector ω(z, p) and the point (z, p) ∈ C l , respectively, see Sections 2.4.1 and 5.4. (iii) Let S be the tensor Shapovalov form on V ⊗n , then S(ω(z, p), ω(z, p)) = Hess t log Φ(z, p) . The vector ω(z, p) is called the Bethe vector corresponding to a critical point (z, p). Let z be a generic point of C n with distinct coordinates and such that all the critical points of the function log Φ( z , ·) are nondegenerate. The critical set C z of log Φ( z , ·) consists of dim (V ⊗n ) sing λ S l 1 × · · · × S l N−1 -orbits. Each orbit has l 1 ! · · · l N −1 ! points. For any The term of this sum corresponding to a critical point (z, p) can be written as the following integral, see Chapter 5 of [GH]. Choose a small neighborhood U of p in C l−n . Define a torus Γ z,p in U by l − n equations |Φ a j (z, t)| = ǫ a j where Φ a j are derivatives of log Φ( z , ·) with respect to the variables t (a) j , a > 0, and where ǫ a i are small positive numbers. Then F (z, p) ω(z, p) The l 1 ! · · · l N −1 ! terms of the sum in (7.1) corresponding to a single S l 1 × · · · × S l N−1 -orbit are all equal due to the S l 1 × · · · × S l N−1 -invariance of Φ, ω and F . The correspondence z → v F (z) defines a map v F : C n → (V ⊗n ) sing λ which will be called a Bethe vector averaging map. The map v F is a rational map. Indeed, the map is well defined on a Zarisky open subset of C n and has bounded growth as the argument approached the possible singular points or infinity. Theorem 7.1. For any F ∈ C[Σ], the Bethe vector averaging map v F is a polynomial map. Theorem 7.1 is proved in Section 9. 8. Quotient critical set and Bethe algebra We shall denote this isomorphism by the same letter µ. Proof. If F ∈ I C , then v F = 0 for generic z. Hence, v F = 0 as an element of (V S ) sing λ . If v F = 0 as an element of (V S ) sing λ , then F = 0 on a Zariski open subset of C. Hence, F ∈ I C . Therefore, ker µ = I C . The graded character of O C equals the graded character of O W by Corollary 5.4. The graded character of O W is given by (3.2). The graded character of (V S ) sing λ is given by (2.2). Comparing the characters and using Lemma 8.1, we conclude that the induced map µ : O C → (V S ) sing λ is an isomorphism. ij . Proof. The lemma follows from part (ii) of Theorem 6.1. Corollary 8.6. The maps µ : O C → (V S ) sing λ and ν : O C → B V intertwine the action of the multiplication operators on O C and the action of the Bethe algebra B V on (V S ) sing λ , that is, for any F, G ∈ O C , we have Corollary 8.7. Consider the element v 1 ∈ (V S ) sing λ , corresponding to F = 1 under the isomorphism µ. Let us use this element in the definition of the isomorphism η of Theorem 4.2. Then the throughout compositions Zariski open subset of C as follows. 
For a generic point Σ ∈ C, let T = (z, t) be a point of the critical set C ⊂ C l which projects to Σ. Let ω(z, t) be the Bethe vector corresponding the point (z, t). Set where S is the tensor Shapovalov form on V ⊗n , cf. Theorem 6.1. Theorem 8.8. For any v ∈ (V S ) sing λ , the scalar function f v is the restriction to C of a polynomial. Moreover, the map (V S ) sing Proof. Any element of (V S ) sing λ has the form of v F for a suitable F ∈ C[Σ], see (7.1). In that case, by Theorem 6.1. This identity proves the theorem. 9. Proof of Theorem 7.1 9.1. The Shapovalov form and asymptotics of v F . Let T 0 be a point of the critical set C ⊂ C l , see Section 5.1. Consider the germ at 0 ∈ C of a generic analytic curve C → C l , s → T (s) = (z(s), t(s)), with T (0) = T 0 such that for any small nonzero s, the point (z(s), t(s)) is a nondegenerate critical point of log Φ(z(s), · ), and z(s) has distinct coordinates. The corresponding Bethe vector has the form, ω(T (s)) = w α s α + o(s α ), where α is a rational number and w α ∈ (V ⊗n ) sing λ is a nonzero vector. Let X 0 denote the point of W corresponding to T 0 . Namely, we take the image Σ 0 of T 0 in C under the factorization by the S l -action and then set X 0 = ι(Σ 0 ). Lemma 9.1. Assume that X 0 is not a critical point of the Wronski map W → C n . Then S(w α , w α ) is a nonzero number, where S is the tensor Shapovalov form. Proof. For a small nonzero s, the Bethe vectors corresponding to S l 1 × · · · × S l N−1 -orbits of the critical points of log Φ(z(s), · ) form a basis of (V ⊗n ) sing λ , see [MV1]. That basis is orthogonal with respect to the Shapovalov form. The Shapovalov form is nondegenerate on (V ⊗n ) sing λ . By assumptions of the lemma, the limit of the direction of the Bethe vector ω(z(s), t(s)) as s → 0 is different from the limits of the directions of the other Bethe vectors of the basis. These remarks imply the lemma. Proof. We have Hess t log Φ(T (s)) = S(ω(T (s)), ω(T (s))) = s −2α S(ω α , ω α ) + o(s −2α ), so the ratio ω(T (s))/Hess t log Φ(T (s)) has order s −α as s → 0. 9.2. Possible places of irregularity of v F . To prove Theorem 7.1, we need to show that v F is regular outside of at most a codimension-two algebraic subset of C n . There are three possible codimension-one irregularity places of v F : (9.1) A pole of v F may occur at a place where z has equal coordinates. (9.2) A pole of v F may occur at a place where z has distinct coordinates and the function log Φ(z, · ) has a degenerate critical point. (9.3) A pole of v F may occur at a place where z has distinct coordinates and there is a critical point which moved to a position with t (1) i = z j for some pair (i, j), or to a position with t for some triple (a, i, j), a > 0 . Problem (9.1) is treated in [MV2]. By Lemmas 4.3 and 4.4 of [MV2], the map v F is regular at generic points of the hyperplanes z i = z j . (In fact, it is shown in Lemmas 4.3 and 4.4 of [MV2], that the number α of Corollary 9.2 is negative at generic points of possible irregularity corresponding to such hyperplanes, see [MV2].) Problem (9.2) of possible irregularity of v F at the places, where log Φ(z, · ) has a degenerate critical point, is treated in a standard way using integral representation (7.2). One replaces the sum in (7.1) by an integral over a cycle which can serve all z that are close to a given one, and then observes that the integral is holomorphic in z; see, for example, Sections 5.13, 5.17, 5.18 in [AGV]. 
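The integral representation (7.2) invoked above is an instance of the multidimensional (Grothendieck) residue formula of Chapter 5 of [GH]. A sketch of the identity it rests on, for a nondegenerate critical point (z, p), with m = l − n and the integral understood componentwise in V^{⊗n}, is

\[
\frac{1}{(2\pi i)^{m}}\int_{\Gamma_{z,p}}
\frac{F(z,t)\,\omega(z,t)}{\prod_{a,j}\Phi^{a}_{j}(z,t)}\,dt
\;=\;
\frac{F(z,p)\,\omega(z,p)}{\operatorname{Hess}_t \log \Phi(z,p)},
\qquad
\Phi^{a}_{j} \;=\; \frac{\partial \log \Phi}{\partial t^{(a)}_{j}} .
\]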
Thus, to prove Theorem 7.1 we need to show that generic points of type (9.3) correspond to the points of W which are noncritical for the Wronski map and which have α 0. 9.3. Flag exponents. A point X ∈ W is an N-dimensional space of polynomials with a basis g 1 (u), . . . , g N (u) such that deg g i = λ i + N − i. Each polynomial g i is defined up to multiplication by a number and addition of a linear combination of g i+1 , . . . , g N . For any a ∈ C define distinct integers d X,a = (d 1 , . . . , d N ) called the flag exponents of X as follows. Choose a basis g 1 , . . . , g N of X (not changing the degrees of these polynomials) so that g 1 , . . . , g N have different orders at u = a and set d i to be the order of g i at u = a. We say that X is of type d if there exists a ∈ C such that d X,a = d. For every d, denote by W d ⊂ W the closure of the subset of points of type d. We are interested in the subsets W d ⊂ W which are of codimension one and whose points correspond to Problem (9.3). Such subsets will be called essential. Proof. The lemma is proved by straightforward counting of codimensions. If X is a point of W d 1+ , then for a suitable ordering of its root coordinates we have 2 . If X is a point of W d i+ , i > 1, then for a suitable ordering of its root coordinates we have t . If X is a point of W d i− , then for a suitable ordering of its root coordinates we have t 1 . Each of these properties is a problem of type (9.3). Lemma 9.4. Each essential subset is irreducible. Proof. It is easy to see that an essential subset is the image of an affine space under a suitable map. Lemma 9.5. Generic points of every essential subset are not critical for the Wronski map. Proof. The proof is similar to the proof in Proposition 8 of [EG] of the fact that the Jacobian det ∆ q is nonzero. 9.4. Proof of Theorem 7.1. 9.4.1. Let W d be an arbitrary essential subset. We fix a certain positive integer q. Then for any numbers r = (r 0 , r 1 , r 2 , . . . , r q ), such that r 0 ∈ C, r i ∈ R for i > 0, 0 < r 1 < r 2 < · · · < r q , we choose a point X r (ǫ, s) ∈ W depending on two parameters ǫ, s so that X r (ǫ, 0) ∈ W d and the point X r (ǫ, s) is nice for small nonzero s. The dependence of X r (ǫ, s) on r in our construction is generic in the following sense. For any hypersurface Z ⊂ W d we can fix r so that the curve X r (ǫ, 0) does not lie in Z. For any fixed r, we choose ordered root coordinates T r (ǫ, s) of X r (ǫ, s) and consider the corresponding Bethe vector ω (T r (ǫ, s)). We choose a suitable coordinate ω J (T r (ǫ, s)) of the Bethe vector and show that for small ǫ the coordinate ω J (T r (ǫ, s)) has nonzero limit as s → 0. That statement and Corollary 9.2 show that the corresponding summand in (7.1) is regular at W d . The proof that ω J (T r (ǫ, s)) has nonzero limit is lengthy. We present it for N = 2 and 3. The proof for N > 3 is similar. 9.4.2. Proof for N = 2. A point X ∈ W is a two-dimensional space of polynomials. The only essential subset is W (0,2) . This essential subset corresponds to the problem z λ 1 +λ 2 = t (1) λ 2 of type (9.3) (after relabeling the root coordinates). For any numbers r = (r 0 , r 1 , r 2 , . . . , r λ 2 +λ 1 −1 ), such that r 0 ∈ C, r i ∈ R for i > 0, 0 < r 1 < r 2 < · · · < r λ 2 +λ 1 −1 , we choose X r (ǫ, s) to be the two-dimensional space of polynomials spanned by Clearly, the dependence of X r (ǫ, s) on r is generic in the sense defined in Section 9.4.1. 
We consider the asymptotic zone 1 ≫ |ǫ| ≫ |s| > 0 and describe the asymptotics in that zone of the roots of g 2 and Wronskian Wr(g 1 , g 2 ). The leading terms of asymptotics are obtained by the Newton polygon method. If the leading term of some root is at least of order s 2 , we shall write that this root equals zero. Let us call the root coordinates t (1) λ 2 , z λ 2 +λ 1 exceptional, and the remaining root coordinates regular. For each regular root coordinate y the leading term of asymptotics of y − r 0 as ǫ → 0 has the form Aǫ B for suitable numbers A = 0, B. Lemma 9.6. The pairs (A, B) are different for different regular root coordinates. Proof. A proof is by inspection of the list. For each exceptional coordinate y the the absolute value of the difference y − r 0 is much smaller as ǫ → 0 than for any regular coordinate. We consider the asymptotic zone 1 ≫ |ǫ| ≫ |s| > 0 and describe the asymptotics in that zone of the roots of the polynomials g 3 , Wr(g 2 , g 3 ), Wr(g 1 , g 2 , g 3 ). We obtain the leading terms of asymptotics by the Newton polygon method. If the leading term of some root is at least of order s 2 , we shall write that this root equals zero. The roots of g 3 are of the form: 2 ∼ r 0 − ǫ r 2 , . . . , t where the dots denote the monomials which are not important for the leading asymptotics of the roots. The roots of Wr(g 2 , g 3 ) are of the form λ 3 +λ 2 ∼ r 0 − s. Let us call the root coordinates t (1) λ 3 +λ 2 , z λ 3 +λ 2 +λ 1 exceptional, and the remaining root coordinates regular. For each regular root coordinate y the leading term of asymptotics of y − r 0 as ǫ → 0 has the form Aǫ B for suitable numbers A = 0, B. Lemma 9.9. The pairs (A, B) are different for different regular root coordinates. Proof. A proof is by inspection of the list. For each exceptional coordinate y the absolute value of the difference y −r 0 is much smaller as ǫ → 0 than for any regular coordinate. For every σ the second product in (9.5), has well-defined limit The largest second products are those with For every σ, τ , the first product in (9.5) has well-defined limit ). That limit is an acceptable function of order i − r 0 ) + B(z i − r 0 )). The largest first products are those with i − r 0 ) + B(z i − r 0 )). Lemma 9.11. The functionω J (ǫ) is acceptable. Its order and leading coefficient are given by the formulas Proof. It is easy to see that if σ, τ are such that the second product in (9.5) has order . This implies the lemma. 9.4.4. Proof for N = 3 and W (0,2,1) . We study the problem t (1) λ 3 of type (9.3) (after relabeling the root coordinates). Let us call the root coordinates t (2) λ 3 +λ 2 exceptional, and the remaining root coordinates regular. For each regular root coordinate y the leading term of asymptotics of y − r 0 as ǫ → 0 has the form Aǫ B for suitable numbers A = 0, B. For each exceptional coordinate y the the absolute value of the difference y − r 0 is much smaller as ǫ → 0 than for any regular coordinate. Proof. Divergent summands in (9.8) are the summands with factors The divergent summands come in pairs. There are two types of divergent pairs. The first type has the form where C is a common factor. The second type has the form where C is a common factor. Each pair has well-defined limit as s → 0, lim s→0 (p Ckij ) . These limits will be called resonant pairs. Proof. A proof is by inspection of the list. For each exceptional coordinate y the the absolute value of the difference y − r 0 is much smaller as ǫ → 0 than for any regular coordinate. 
Then the order of lim s→0 q equals b and the order of lim s→0 (ω J (T r (ǫ, s)) − q) is greater than b. Therefore, lim s→0 ω J (T r (ǫ, s)) is nonzero for small ǫ. For N = 3 and every essential subset W d , we proved that the Bethe vector is nonzero at generic points of W d and, hence, the number α of Corollary 9.2 is nonpositive. Thus, Theorem 7.1 is proved for N = 3.
2009-07-19T07:04:10.000Z
2009-07-19T00:00:00.000
{ "year": 2009, "sha1": "48239ceb3832dbbe10abfc3b23d4c418667f4a32", "oa_license": "implied-oa", "oa_url": "https://doi.org/10.2969/aspm/06210475", "oa_status": "HYBRID", "pdf_src": "Arxiv", "pdf_hash": "48239ceb3832dbbe10abfc3b23d4c418667f4a32", "s2fieldsofstudy": [ "Mathematics", "Physics" ], "extfieldsofstudy": [ "Mathematics", "Physics" ] }
134913883
pes2o/s2orc
v3-fos-license
Challenges Endangering Economic Viability and Ecological Sustainability of Crop Diversification in Himachal Pradesh

The study was conducted in all the four zones of Himachal Pradesh with one representative district from each zone. All is, however, not well with the ongoing process of crop diversification in the state. Some impending challenges that endanger the economic viability and ecological sustainability of cash crops were lack of irrigation facilities, small land holdings, fluctuating prices, inadequate storage facilities, soil erosion, loss of water holding capacity, lack of processing facilities and lack of proper knowledge about the application of insecticides, pesticides and fertilizers. The cultivation of high value crops has started showing increasing symptoms of unsustainability due to, among other things, falling soil fertility, erratic weather conditions and the emergence of numerous insects, pests and diseases. The adoption of the same cropping sequence year after year has caused the loss of micronutrients, leading to deterioration in the overall soil health. The high incidence of diseases has led to an excessive use of agrochemicals that has given rise to a vicious cycle of falling productivity, more use of chemicals, further fall in productivity, and so on. This has not only escalated the production cost but has also affected the environment and bio-diversity adversely.

Introduction

Agricultural transformation in a mountainous state like Himachal Pradesh is circumscribed by mountain specificities, namely, inaccessibility, fragility, marginality, niche and the human adaptation mechanism created by unique vertical dimensions that distinguish them from plains (Jodha, 1992). While the first three specificities contribute in varying degrees, inter alia, to physical isolation, distance and high transportation costs and, therefore, create formidable constraints for agricultural transformation, the latter two suggest the availability of potential for growing a variety of micro-niche based high value cash crops. The proliferation of extremely small and tiny holdings on account of factors like continuing population pressure on land coupled with a general lack of rural non-farm employment opportunities, liberal laws of inheritance and the resultant sub-division of holdings, etc. are the major constraints in boosting agricultural production and productivity and raising the levels of living of a typical Indian farmer. The problem is more serious in mountainous states like Himachal Pradesh where only 13 per cent of the total geographical area is available for cultivation. There is a preponderance of tiny holdings in the state; about 88 per cent of the holdings are small and marginal, owning less than two hectares of land and accounting for about 54 per cent of the operated area. The overall average size of holdings is 1.00 ha (Census of Himachal Pradesh, 2011). Therefore, improving the production and productivity of these tiny holdings and, in the ultimate analysis, the level of living of marginal and small farmers is a major challenge for the planners and policy makers. An acceptable and meaningful transformation will be expected to improve productivity, build resilience in farming systems, improve livelihoods and reduce harm to the environment. Such critical practices and techniques include crop diversification through rotations and intercropping, agroforestry, conservation tillage, cultivation of drought-resistant crops, water harvesting and integrated soil fertility management (Faures et al., 2013). Truscott et al. (2009) consider crop diversification an environmentally sound alternative for the control of parasites and the maintenance of soil fertility in agriculture. Crop diversification towards selective high value crops including fruits and vegetables, compatible with the comparative advantage of the region, is recommended as an effective strategy for raising incomes, generating employment opportunities and alleviating poverty among small and marginal households (Vyas, 1996; Joshi et al., 2007). According to Njeru (2013), crop diversification not only allows more efficient utilization of agro-ecological processes, but also provides diversity for the human diet and improves income, which improves the purchasing power of the household for buying other foods.

Materials and Methods

Data was collected from all the four agro-climatic zones of Himachal Pradesh for the agricultural year 2016. One district from each agro-climatic zone was selected purposively. Bilaspur (Shivalik Hill Zone), Solan (Mid Hill Zone), Kullu (High Hill Zone) and Kinnaur (Cold Dry Zone) districts were chosen for reasons of high significance and scope of introducing diversification activities into the farming systems of the area. Two blocks from each selected district were purposively selected, one relatively highly diversified and one relatively least diversified, in consultation with district level officers. For the selection of the sample, a three-stage stratified random sampling design was adopted, with the development block as the primary unit, the village as the secondary unit and the sampled farmer as the ultimate unit. Accordingly, 30 households were selected from each block, thus making a total sample size of 240 from the four agro-climatic zones of Himachal Pradesh. Garrett's Ranking Technique was used to prioritize the imminent challenges/constraints. The per cent position of each rank was converted into scores using Garrett's table.

Garrett's ranking technique

For each constraint, the scores of the individual respondents were added. Thus, the mean score for each constraint was ranked by assigning the highest rank (1) to the highest Garrett mean score.
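A minimal sketch of this procedure is given below; it assumes the standard per cent position formula 100(R_ij − 0.5)/N_j and a caller-supplied conversion table, since Garrett's published score table is not reproduced here (the table values in the example are placeholders, not Garrett's scores).

```python
# Illustrative sketch of Garrett's Ranking Technique as described above.
# Assumptions (not taken from the paper): per cent position = 100*(Rij - 0.5)/Nj,
# and a caller-supplied Garrett conversion table with placeholder values.

def garrett_ranking(rank_matrix, garrett_table):
    """rank_matrix[i][j] = rank given by respondent i to constraint j (1 = most severe).
    garrett_table maps a rounded per cent position to a Garrett score."""
    n_constraints = len(rank_matrix[0])
    mean_scores = []
    for j in range(n_constraints):
        scores = []
        for row in rank_matrix:
            percent_position = 100.0 * (row[j] - 0.5) / n_constraints
            scores.append(garrett_table[round(percent_position)])
        mean_scores.append(sum(scores) / len(scores))
    # Rank 1 goes to the constraint with the highest Garrett mean score.
    order = sorted(range(n_constraints), key=lambda j: -mean_scores[j])
    ranks = {j: r + 1 for r, j in enumerate(order)}
    return mean_scores, ranks

# Placeholder table for three constraints (per cent positions ~17, 50, 83).
toy_table = {17: 73, 50: 50, 83: 27}
toy_ranks = [[1, 2, 3], [2, 1, 3], [1, 3, 2]]   # three respondents, three constraints
print(garrett_ranking(toy_ranks, toy_table))
```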
Economic viability and ecological sustainability Agriculture is one of the sectors significantly affected by climate change and variability. Seasonality dynamics, increased frequency of droughts (especially mid-season dry spells), increased temperatures, altered patterns of precipitation and intensity are some of the extreme weather events. Declining crop yields, increased agricultural risks, diminishing soil fertility and environmental degradation are some of the main challenges which continue to threaten societal goals of improving food, income and nutrition security especially in smallholder farming. It, therefore, calls for a significant transformation in agriculture to withstand the emerging challenges (Tewari et al., 1992). Farm profitability from crops were considered to be the indicator of economic viability. Farm productivity was measured through physical yield of crops and crop yield data were collected through a household survey. Farm profitability was then determined based on financial return per unit of land and the financial return was analyzed through Benefit-Cost ratio. It was considered that if the benefit-cost ratio found greater than 1 then that crop is economically viable in that area. All the high value crops were found viable in all the four agroclimatic zones of Himachal Pradesh. Constraints/ Challenges High value crops like fruits, vegetables and flowers are confronted with numerous constraints due to their highly perishable nature, high-tech requirements, costly planting material/seeds, inputs, etc. Thus, for encouraging the production and efficient marketing of these crops, various problems and constraints in their production and marketing with which they are confronted with, are needed to be identified. Production constraints In production constraints at overall level lack of irrigation facilities was the major problem found in the study area (Table 1). This was the major problem with highest rank in Zone-I, Zone-II and Zone-III. It was expected that the unreliable rainwater would impose severe limitations on the agricultural production. Unreliable rainfall is a major constraint in the diversification of agriculture. While in Zone-IV small land holding was the major problem. It was also highly severe problems at state level. There is a preponderance of small land holdings in Himachal Pradesh. According to Agricultural Census of Himachal Pradesh (2010-11), small and marginal farmers together constitute 88% of the total population of the state. The average size of holding was 1.0 hectares in 2010-11. Fluctuating production, unfertile holding and cash shortage when needed also contributed to production constraints. Problems like nonavailability of skilled labour at operation period, costly labour, non-availability of quality seed and planting material, fertilizer and Plant protection chemicals not available in time were also observed. India lacks modernized infrastructure for promoting the agriculture sector. Rudimentary policies and old fashioned equipment's and practices used by farmers in India are not sustainable, resulting in low productivity for many agricultural commodities (Dwivedy, 2011). Marketing constraints Agri-commodity sector is still lacking in a well-developed, organised and integrated market for spot trading of commodities. Farmers quite often are faced with a risk of what to grow and when and where to sell. Any development in this front will directly facilitate the growth of the commodity futures markets also on those agri products. 
The agricultural products prices are highly volatile. A farmer is highly susceptible to price fluctuations both of farm produces and farm inputs (Kumar, 1991, Negi et al., 1997. Price fluctuation is a multifaceted problem attributed by various factors which, when combined, culminate in dangerous consequences for the most vulnerable. Although high prices can technically be good news for farmers, price fluctuation is extremely dangerous, as farmers and other agents in the food chain risk losing their investments if prices fall. This severe marketing constraint was noticed during the survey in all the zones and is thus, ranked first (Table 2). Distant markets are playing their role in proving a constraint for diversification in agriculture and are hence, ranked second. High transport charges and lack of all-weather roads, malpractices by traders at the time of auction and inadequate storage facilities were another highly severe problems related to marketing which were ranked 3 rd , 4 th and 5 th respectively at state level in the study area. Traditional harvesting and storage conditions of Indian farms and farmers result in large proportions of crop wastage. It has been estimated that crop wastage due to inefficient storage is 7 per cent of annual grain production per year in India. This percentage accounts for 21 million tonnes of wheat grain alone, as India lacks proper cold storage and cold chain transportation (Suprem et al., 2013). Ecological constraints The farming community is facing several threats due to environmental changes and pollution. Crop damages due to climatic changes are putting a lot of pressure on the farmers. The cultivation of high value crops, especially horticultural crops, has started showing increasing symptoms of unsustainability due to, among other things, falling soil fertility, erratic weather conditions and the emergence of numerous insects, pests and diseases. The adoption of same cropping sequence year after year has caused the loss of micronutrients leading to deterioration in the overall soil health. Land use pattern in the state of Himachal Pradesh in the Indian Western Himalayas has been undergoing rapid modifications due to changing cropping patterns, rising anthropogenic pressure on forests and climate changes. Sharma (2011) reported that the emerging challenges like rapid depletion of soil fertility, changing weather and climatic conditions, increasing erosion of comparative advantages, increasing competition from cheaper imports, inadequate infrastructural facilities and old age of crop bearing apple plantations pose a serious threat to the economic viability and ecological sustainability of the process of crop diversification in Himachal Pradesh. In the study area availability of water resources showed a deep impact in all the zones (Table 3). However at state level problem of availability of water resources was ranked first and incidence of diseases and insect pests attack was ranked second. There has been an ever increasing pressure on the natural resources due to the rising population. Loss of soil fertility, soil erosion, loss of water holding capacity, soil contamination with chemical fertilizers, pesticides and others, loss of genetic diversity of planting material, loss of soil organisms/ predators were the other problems which were ranked 3 rd 4 th , 5 th , 6 th, 7 th and 8 th respectively. Other challenges Consumption of processed products started since time immemorial. 
The production was mainly for private household consumption and commercial production started very late. The processing facilities were very much limited. Absence of cold storage for storing agriculture produce was major problem in the study area (Table 4). Lack of proper knowledge about the application of insecticides pesticides and fertilizers was the second most important constraint at overall level. The farmers know little to nothing about the pesticides they use. They are solely reliant on information from input dealers. Approved uses, correct doses and waiting periods are not mentioned on the labels of pesticides bottles or packets. The labels state that the leaflet given along with the pesticide must be consulted before use; however, most of the farmers ignored the same. Costly planting material, unreliable sources of seed/ planting material, irregular monsoon, large initial investment needed, wild animals menace, lack of policy support and less experience in the field which were ranked in their ascending order from 3 to 9 respectively in Himachal Pradesh. Lack of availability/adoption of advanced technology suitable for hill agriculture is one of the main constraints in crop diversification in the state of Arunachal Pradesh (Mishra, 2006). The transition towards high-value agriculture is not without constraints, especially for smallholders. If the high-value commodities are products that the farmers have not grown before, the farmers may lack necessary information on production methods, marketing opportunities, and the probable distribution of net returns. This problem is particularly acute when the target consumers have very specific quality requirements and/or strict food safety requirements (Minot and Roy, 2006). The diversification of hill agriculture can provide better choices and quality options for sustaining the livelihoods of hill farmers but what is necessary in this process is to develop a clear understanding of the ecologically and economically sustainable farming options. Highly severe constraints related to production were lack of irrigation facilities and small land holdings. In the marketing of fruits and vegetables in Himachal Pradesh fluctuating price was observed as major constraint. Distant markets, high transport charges and lack of all-weather roads were other highly severe problems. Major ecological constraints were problems in availability of water resources and incidence of diseases and insect pests attack. The emerging challenges like lack of proper knowledge about the application of insecticides pesticides and fertilizers, rapid depletion of soil fertility, changing weather and climatic conditions, increasing erosion of comparative advantages, increasing competition from cheaper imports, inadequate infrastructural facilities and old age of crop bearing apple plantations pose a serious threat to the economic viability and ecological sustainability of the process of crop diversification in the state.
2019-04-27T13:12:08.830Z
2018-10-10T00:00:00.000
{ "year": 2018, "sha1": "2aeea28ac89cc89e3176ada18dc06db4c3dd6e72", "oa_license": null, "oa_url": "https://www.ijcmas.com/7-10-2018/Nisha%20Devi%20and%20R.S.%20Prasher.pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "ebe893771bedd5ba114705232c9b68a743828cf2", "s2fieldsofstudy": [ "Environmental Science", "Economics", "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Business" ] }
234286311
pes2o/s2orc
v3-fos-license
Evaluation of microstructural, magnetic properties and surface/near-surface chemical state analysis of Mn-CuFe2O4 nanoparticle Nanocrystalline Mn substituted CuFe2O4 nanoparticles (MCFNPs) were synthesized using urea and egg white. The effects of heat treatment on crystal structure and magnetic properties have been studied using X-ray diffractometer (XRD) and Vibrating Sample Magnetometer (VSM). The single-phase cubic spinel structure of as synthesized MCFNPs was recognized from XRD profile. There are some impurity peaks in the annealed samples, which are the decomposition of the ferrites at higher annealing temperatures to the α-Fe2O3 phase. The crystallite size and Lattice parameter of the samples increases with annealing temperature. The crystallite sizes of the MCFNPs were found in the range ~10 to 55 nm. The morphology and particle size of the sample (annealed at 900 °C) have been recorded through SEM and TEM. The secondary non-magnetic impurity phase influences the magnetic nature of the samples. The saturation magnetization (Ms) decreases at a temperature of 600 °C due to the presence of non-magnetic α-Fe2O3 phase. The surface / near-surface chemical states of the 900 °C annealed MCFNPs were analyzed using XPS within a range of 0-1000eV binding energies. Introduction Spinel ferrite is of particular interest due to its potential use in a variety of fields, from scientific and electronic applications to sindustrial applications [1]. Fine magnetic particles are now the subject of study due to their many technical applications. One of the most interesting uses of magnetic materials is treatment with hyperthermia, which is known to be an alternative treatment for chemotherapy, radiotherapy and surgery in cancer therapy [2]. The structure, size and morphology are related to the conditions of preparation and strongly determine the properties of MnFe2O4. The preparation of NiFe2O4, MnFe2O4, Ni and Zn ferrites has been reported by several groups [3][4][5]. Because of its excellent magnetic properties, along with electrical and semiconducting properties, many researchers are interested in studying the various physical properties of CuFe2O4 [6]. The magnetic activity of CuFe2O4 has attracted a great deal of interest and has been the subject of extensive studies. Recently, using a sonication method, M.A.S. Amulys et al., [7] synthesized nanostructured spinel MnFe2O4 with different grain sizes ranging from 16 to 24 nm and studied their photocatalytic activities. H. B. Desai et al., [8] reported MnFe2O4 with an auto combustion process of 40 nm grain size for photocatalytic applications. S. V. Bhandare, et al., [9] prepared MnFe2O4 nanoparticles by sol gel combustion synthesis and analyzed their magnetic properties for different annealing temperature. For the synthesis of magnetic nanoparticles, many methods have been developed, including thermal decomposition, co-precipitation, polyol, reverse microemulsion, microwave combustion, and combustion methods [10]. The combustion method is a very successful and promising technique among the preparation methods, because the particles produced by this method are pure and uniform with a limited distribution of sizes. In this research, a simple urea and egg white assisted combustion method was favored over the synthesis of Mn substituted CuFe2O4 nanoparticles. The goal was to research the effects of heat treatment on the size of the particles and magnetic properties of the CuFe2O4 nanoparticles substituted by Mn. 
The thorough investigation of the properties Int. Res. J. Multidiscip. Technovation, 3(2) (2021) 14-19 | 15 impacted by the size is analyzed and the findings are summarized. Synthesis Mn substituted CuFe2O4 nanoparticles have been prepared by urea and egg white assisted combustion method. In this present work, Urea and egg white were used as a fuel to prepare nanoparticles in combustion process. Analytical grade (Merck) Mn (NO3)26H2O, Cu(NO3)26H2O, Fe (NO3)39H2O were used as raw materials to prepare Mn-CuFe2O4 nanoparticles. These materials are taken at appreciable molar concentrations to maintain stoichiometric as 1:2. The solutions of precursors are mixed with 50 ml of egg white solution which is acting as a chelating agent. This solution mixture was thoroughly stirred for 1 h. The mixed solution was further stirred under heating at 100°C using magnetic stirrer until to get desired final ferrite powder. The procedure has been repeated with 50% of urea instead of egg white solution. The ferrite powder obtained was milled into a fine powder in an agate mortar and a part of the powder was heat treated in the air at 600 °C and at 900 °C. Characterization techniques The list of characterizations and instruments details are given in table 1. Results and discussion Structure evaluation of the nanocrystallized products of MCFNPs synthesized using urea and egg white annealed at 600°C and 900°C was performed by XRD and the diffraction spectra is presented in Fig. 1a & Fig.2a. All the diffraction peaks observed were indexed by the JCPDS card indicating that the products of the MCFNPs are the cubic spinel structure. The appearance of secondary impurity peaks from the spectrum XRD indicates that the α-Fe2O3 phase was decomposed at 600 °C [11][12][13]. The intensity of the secondary peaks slowly vanished at a higher temperature of 900 °C. The diffraction peaks become narrower and sharper, indicating an improvement in particle size and crystallinity after annealing. The average crystallite size of the products was determined using the formula Debye-Scherer (t=0.9λ/β cos θ). The lattice constant (a) was determined from the XRD profile using the formula a2= d2/ (h 2 + k 2 + l 2 ). The crystalline size (t) and the lattice constant (a) of the products are shown in Table 2. The crystalline sizes of the MCFNPs samples are located in the range 9.4 to 46.6 nm for urea-assisted synthesis and 14.9 to 54.8 nm for egg white induced synthesis. Typical external morphologies of the 900 °C annealed MCFNP samples recorded by SEM are shown in Fig.1b and 2b. The morphology of the samples (Fig. 1b and 2b) of MCFNPs has irregular and spherical shaped particles with a slight agglomeration, which may have the effect of replacement of Mn, defects and also the effect of annealing [12]. As shown in Fig.1c and Fig2c, the transmission electron microscope (TEM) examined the microstructure and particle size of the 900°C annealed MCFNPs samples. The microstructure, size and shape of the products identified by the SEM morphologies can be clearly confirmed. Due to comparatively higher temperatures and interactions between magnetic nanoparticles, agglomeration can be understood at higher temperatures. There is also an inevitable grade of agglomeration at higher temperatures [12]. The particle sizes of the MCFNPs are compatible with the XRD research findings. At the top right of the Figs. 1c & 2c, the corresponding selected area electron diffraction (SAED) pattern of MCFNPs are shown. 
The superimposed bright spots, with a consistent lattice arrangement, demonstrate the strongly crystalline nature of the samples. Fig. 1d & Fig. 2d display room temperature magnetic measurements by vibrating sample magnetometer (VSM) of MCFNPs prepared using urea and egg white induced combustion synthesis. Basic magnetic parameters such as saturation magnetization (Ms) and coercivity (Hc) of the as-synthesized MCFNPs and those annealed at different temperatures (600 °C and 900 °C) are shown in Table 2. The size of the particles and the purity of the phase have a significant role to play in the magnetic parameters. From the findings of XRD, the size of the particles increases with annealing. Generally, the saturation magnetization of spinel ferrite nanoparticles increases with a rise in size due to the effect of heat treatment. Due to the presence of a secondary (non-magnetic) phase at higher annealing temperature, the magnetization decreases at 600 °C. At 900 °C, the saturation magnetization of the annealed MCFNPs is higher than that at 600 °C, which may be due to the larger particle size and the vanishing of the secondary peaks of the products [12,14,15]. The higher coercive values of the 600 °C annealed MCFNPs using urea and egg white are 236.9 G and 308.5 G, which may be attributed to the difference in the anisotropic field of the ions present in the sample caused by thermal annealing. Conclusion The structural, magnetic and surface chemical state analyses of MCFNPs prepared by using urea and egg white were investigated. The heat treatment effects on particle size and phase purity of the MCFNPs were documented. The existence of a secondary impurity phase, due to the decomposition of MCFNPs at higher annealing temperature, was recorded in the XRD profiles. Spherical, agglomerated magnetic nanoparticles in the range of 40 to 50 nm were examined through TEM. The decrease of the secondary (non-magnetic) phase at the higher annealing temperature (900 °C) leads to better magnetization than in the sample annealed at 600 °C; it is evident that the magnetic parameters are influenced more by the phase purity of the products.
The binding energies of the elements present were identified from the XPS spectra, which clearly reveal the surface chemical states of the MCFNPs.
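As a supplementary note, the crystallite-size and lattice-parameter estimates quoted in the Results follow from the Scherrer equation and Bragg's law, with a = d·sqrt(h² + k² + l²) for a cubic cell. A minimal sketch of that calculation is given below; it assumes Cu Kα radiation (λ = 1.5406 Å) and uses illustrative, not measured, input values.

```python
# Minimal sketch of the crystallite-size and cubic lattice-parameter estimates.
# Assumptions: Cu K-alpha radiation (1.5406 Angstrom), FWHM supplied in degrees,
# shape factor K = 0.9; the peak position and width below are illustrative only.
import math

WAVELENGTH = 1.5406  # Angstrom, Cu K-alpha

def scherrer_size(two_theta_deg, fwhm_deg, k=0.9, wavelength=WAVELENGTH):
    """Crystallite size t = K*lambda / (beta*cos(theta)), with beta in radians."""
    theta = math.radians(two_theta_deg / 2.0)
    beta = math.radians(fwhm_deg)
    return k * wavelength / (beta * math.cos(theta))  # Angstrom

def cubic_lattice_parameter(two_theta_deg, h, k, l, wavelength=WAVELENGTH):
    """Bragg's law d = lambda/(2 sin theta); cubic cell a = d*sqrt(h^2+k^2+l^2)."""
    theta = math.radians(two_theta_deg / 2.0)
    d = wavelength / (2.0 * math.sin(theta))
    return d * math.sqrt(h * h + k * k + l * l)

# Example: a spinel (311) reflection near 2-theta ~ 35.5 degrees with 0.9 degree FWHM.
print(scherrer_size(35.5, 0.9) / 10.0, "nm")           # Angstrom -> nm
print(cubic_lattice_parameter(35.5, 3, 1, 1), "Angstrom")
```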
2021-05-11T00:06:03.179Z
2021-03-10T00:00:00.000
{ "year": 2021, "sha1": "ab75e5bd1f31514569296032780759f33aec0a7d", "oa_license": "CCBY", "oa_url": "https://journals.asianresassoc.org/index.php/irjmt/article/download/337/296", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "36171c2365752f2415a3a2163f1f46647f38693f", "s2fieldsofstudy": [ "Materials Science", "Chemistry" ], "extfieldsofstudy": [ "Materials Science" ] }
119686024
pes2o/s2orc
v3-fos-license
Non-homogeneous boundary value problems for fractional diffusion equations in $L^2$-setting In the present article, we study the diffusion equations with fractional time derivatives. The aim of this paper is to investigate the best possible regularity for the initial value/boundary value problems with non-homogeneous Dirichlet boundary data. The main tool we use here is called the transposition method. Introduction Let Ω be a bounded domain of R d with C 2 boundary Γ := ∂Ω and set Q := Ω × (0, T ) and Σ := Γ × (0, T ). We consider the following initial value/boundary value problem for a partial differential equation with the fractional derivative in time t: u(·, 0) = 0 in Ω (1.1) with 0 < α < 1. Here ∂ α t denotes the Caputo derivative, which is defined by and Γ(·) is the Gamma function (see Podlubny [4]). The differential operator A is given by and the coefficients satisfy the following: where µ > 0 is constant. The function g is given on Σ. In the present paper, we study the regularity of the solution to (1.1) in the sense of Sobolev spaces. As for this problems, Lions and Magenes [3] showed the result for the parabolic equations. As for the spaces with negaive exponents, we define The duality paring between H −r,−s (Σ) and H r,s ,0 (Σ) is denoted by ψ, u r,s for ψ ∈ H −r,−s (Σ) and u ∈ H r,s ,0 (Σ). We define the operator ∂ ν A : H s (Ω) → H s−3/2 (Γ), s > 3/2, as where ν(x) = (ν 1 (x), . . . , ν d (x)) is the outward unit normal vector to Γ at x. Then for u ∈ H r,s (Q) with r > 3/2, the trace theorem (Theorem 2.1 in Chapter 4 of Lions and Magenes [3]) yields In order to define the weak solution of (1.1), we introduce the dual sytem; where D α t is the backward Riemann-Liouville fractional derivative, which is defined by Moreover I ν T − denotes the backward integral of order ν; By the same argument as Chapter 4 in Bajlekova [1], for any f ∈ L 2 (Q) there exists a unique (2.4) Henceforth we will denote this solution by v f . Now we apply (2.1) to v f ∈ H 2,α (Q) and obtain ∂v f Now we are ready to define the weak solution of (1.1); holds for any f ∈ L 2 (Q). The main result of this paper is as follows; Now we roughly describe the strategy of the proof. It is not difficult to show the unique existence of the weak solution of (1.1) for g ∈ L 2 (Σ), but the regularity of H 1/2,α/4 (Q) cannot be directly deduced. Therefore we first show the follwoing two results; (i) Regularity of the solution for g ∈ H −1/2,−α/4 (Σ). After showing (i) and (ii), we obtain the regularity for g ∈ L 2 (Σ) by interpolating the above results. The result for (i) can be easily shown. Indeed, from this definition, we can immediately deduce the following proposition; Thus the mapping is bounded, and so is Therefore the Riesz's representation theorem yields the unique existence of u ∈ L 2 (Q) such that holds for any f ∈ L 2 (Q). Thus we have proven the unique existence of weak solution. Moreover for any f ∈ L 2 (Q) we have where we have used (2.9) in the last inequality. Therefore we have (2.8). Thus we have proved part (i). In the next section, therefore, we will consider case (ii). Moreover the mapping is a continuous surjection. For the proof of this proposition, we prepare the following lemma; Lemma 3.2 (Trace Theorem). Let X and Y be Hilbert spaces such that X is embedded to Y densely and continuously. If u ∈ L 2 (R + ; X) ∩ H r (R + ; Y ) with r > 1/2, then Moreover, the mapping is a continuous surjection. For this lemma, see Theorem 4.2 in Chapter 1 of [3]. 
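In their standard forms (cf. [4] for the Caputo derivative), which appear to be the conventions intended here, the fractional-calculus operators used above are, for 0 < α < 1 and ν > 0,

\[
\partial_t^{\alpha}u(t) \;=\; \frac{1}{\Gamma(1-\alpha)}\int_0^t (t-s)^{-\alpha}\,\frac{du}{ds}(s)\,ds,
\]
\[
\bigl(I^{\nu}_{T-}v\bigr)(t) \;=\; \frac{1}{\Gamma(\nu)}\int_t^T (s-t)^{\nu-1}v(s)\,ds,
\qquad
\bigl(D^{\alpha}_t v\bigr)(t) \;=\; -\frac{d}{dt}\bigl(I^{1-\alpha}_{T-}v\bigr)(t).
\]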
(3.3) For (3.3), Gorenflo, Luchko and Yamamoto [2] showed the L 2 -maximal regularity. In their setting, the Caputo derivative ∂ α t equipped with the initial value u(0) = 0 is formulated as an operator in L 2 (0, T ) with its domain given by By (3.1) and (3.2), this can be rewritten as Thus we can see that if (3.3) has a "solution" in D(∂ α t ), then the initial condition u(·, 0) = 0 is satisfied in a weaker sense. They also revealed that the above operator ∂ α t is essentially equivalent to the Riemann-Liouville derivatives, which were already discussed in [1]. Anyway we obtain the following result; Lemma 3.3. Let 0 < α < 1 and F ∈ L 2 (Q), then (3.3) has a unique solution u ∈ H 2,α For the proof of this lemma, see Theorem 4.3 in [2]. Thus we have completed the proof. Proof of the main result In this section, we complete the proof of Theorem 2.1 by interpolation. Proof of Theorem 2.1. Let π be the operator which operates the boundary data g to the weak solution u of (1.1). Then, by Propositions 2.2 and 3.4, we have where L(X, Y ) denotes the set of linear and bounded operators from X to Y . By Proposition A.1, the operator π also belongs to and therefore we have π ∈ L(L 2 (Σ); H 1/2,α/4 (Q)). Thus we have completed the proof. Appendix A. Interpolation Throughout this article, we often use the word "interpolation" as a complex interpolation defined bellow. As for the detailed argument on this topic, we can refer to Triebel [6], Yagi [7] and the references therein. On the other hand, in some classical works such as Lions and Magenes [3], the "interpolation" of two Hilbert spaces is defined as the domain of fractional powers of positive and self-adjoint operator. We will see that these two kinds of definitions coincide with each other (see Proposition A.2). Therefore, we can refer to [3] and use some of their results (e.g., Theorem 4.2 in Chapter 1 of [3]) without any confusion. In this section, we recall the definition of complex interpolation of Banach spaces and summarize their fundamental properties. Let X i be a Banach space equipped with the norm · X i (i = 0, 1) and suppose that X 1 is embedded in X 0 continuously and densely. Let S be defined by We say that a function F : S → X 0 belongs to H(X 0 , X 1 ) if and only if the following conditions (H1)-(H3) are satisfied; (H1) F is analytic in S. (H2) F is bounded and continuous in S. (H3) R ∋ y → F (1 + iy) ∈ X 1 is bounded and continuous. It is known that H(X 0 , X 1 ) is a Banach space with the norm · H given by For each 0 ≤ θ ≤ 1, we define the space [X 0 , X 1 ] θ by [X 0 , X 1 ] θ := {u ∈ X 0 ; u = F (θ) for some F ∈ H(X 0 , X 1 )} Moreover [X 0 , X 1 ] θ is a Banach space with the norm · θ defined by By the interpolation, we can show various kinds of "intermediate properties". For example, if a linear operator T is bounded from X 0 into Y 0 and from X 1 into Y 1 at the same time, then we can deduce that T is also a bounded operator from [X 0 , X 1 ] θ into [Y 0 , Y 1 ] θ for any 0 < θ < 1. Proposition A.1. Let X 1 (resp. Y 1 ) be embedded to X 0 (resp. Y 0 ) densely and continuously. Then for any 0 < θ < 1, and we have Moreover we can also characterize the domain of fractional power of operators; Proposition A.2. Let X be a Hilbert space and A : X → X be a positive and self-adjoint operator. Then we have with isometry. Here we note that [X, D(A)] θ stated above coincides with [D(A), X] 1−θ in the notation by Lions and Magenes [3].
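For reference, in the standard complex-interpolation setting, with S presumably the strip {z ∈ C : 0 < Re z < 1}, the norms mentioned in the appendix are

\[
\|F\|_{H(X_0,X_1)} \;=\; \max\Bigl\{\,\sup_{y\in\mathbb{R}}\|F(iy)\|_{X_0},\ \sup_{y\in\mathbb{R}}\|F(1+iy)\|_{X_1}\Bigr\},
\qquad
\|u\|_{\theta} \;=\; \inf\bigl\{\|F\|_{H(X_0,X_1)} \,:\, F\in H(X_0,X_1),\ F(\theta)=u\bigr\}.
\]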
2015-01-07T13:18:54.000Z
2015-01-07T00:00:00.000
{ "year": 2015, "sha1": "b724dd28b3dd844d350939c4c97f17b118cf2b51", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "b724dd28b3dd844d350939c4c97f17b118cf2b51", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
73569206
pes2o/s2orc
v3-fos-license
The comparison of the 3-fluid dynamic model with experimental data The article considers a way to compare large bulks of experimental data with theoretical calculations, in which the quality of theoretical models is clearly demonstrated graphically. The main idea of the method consists in grouping physical observables, represented by experiment and theoretical calculation, into samples, each of which characterizes a certain physical process. A further choice of a convenient criterion for comparing measurements and calculations, its calculation and averaging within each sample and then over all samples, makes it possible to choose the best theoretical model in the entire measurement area. Published theoretical data of the three-fluid dynamic model (3FD) applied to the experimental data from heavy-ion collisions at the energy range $\sqrt{s_{NN}}\,=\,2.7 - 63$ GeV are used as example of application of the developed methodology. When analyzing the results, the quantum nature of the fireball, created at heavy ion collisions, was taken into account. Thus, even at energy $\sqrt{s_{NN}}\,=\,63$ GeV of central collisions of heavy ions, there is a nonzero probability of fireball formation without ignition of the quark-gluon plasma (QGP). At the same time, QGP ignition at central collision energies above at least $\sqrt{s_{NN}}\,=\,12 GeV occurs through two competing processes, through a first-order phase transition and through a smooth crossover. That is, in nature, these two possibilities are realized, which occur with approximately the same probabilities. Introduction Articles devoted to the study of the formation of Quark-Gluon Plasma (QGP) in heavy ion collisions contain a huge amount of experimental and theoretical material [1][2][3][4][5][6][7][8][9][10]. If some criterion is used to assess the quality of the description of experimental data by some theoretical model, then the question arises of systematizing a large set of calculated criteria. Usually, one type of observables (for example, particle spectra) is analyzed separately from others. This leads to a contradicting interpretation of the experimental data. We came to conclusion that need the quantitative characteristics of the degree of agreement between theoretical models and experiments, which having a large amount of observational material, expressed by a single number for each model and for the each energy of heavy-ion Now, in order to compare the T 1 model with another set of experimental data s 2 of physical observables {B 1 , ..., B k } (related to other types of particles or physical processes), analogue (3) should be calculated: 1 ) have approximately the same value, then the sum (3) or (4) will lose some terms in the numerator that have the fewest data points. As a result, we lose some information about the physical processes under study, and we compare the truncated data sets. Moreover, using any weighted averaging, we reduce the set of observables, which distorts the analysis. To avoid this truncation of data, a modification has been made (1 -4): In formulas (7-8), we average criteria over the number of observables in each set of observables. Such averaging gives possibility for correct calculation of criteria for two sets inside one model T 1 . 
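As an illustration of the averaging scheme of formulas (5)-(11), including the averaging over united sets introduced just below, the pipeline can be sketched as follows. The per-point criterion is taken here as the relative deviation |theory − exp| / |exp|, which stands in for the exact expression of the relative criteria, not reproduced in this extract.

```python
# Sketch of the set-by-set averaging of model-vs-experiment criteria described above.
# Assumption: the per-point relative criterion is |theory - exp| / |exp|.

def observable_criterion(theory, experiment):
    """Average relative deviation over the data points of one observable."""
    return sum(abs(t - e) / abs(e) for t, e in zip(theory, experiment)) / len(experiment)

def set_criterion(observables):
    """Average of the per-observable criteria within one set of observables,
    i.e. averaging over the number of observables rather than data points."""
    crits = [observable_criterion(th, ex) for th, ex in observables]
    return sum(crits) / len(crits)

def model_criterion(sets):
    """Arithmetic average over all sets, giving a single number per model."""
    return sum(set_criterion(s) for s in sets) / len(sets)

# Toy comparison of one model against two sets of observables.
set1 = [([1.0, 2.0], [1.1, 1.9])]                      # (theory, experiment) pairs
set2 = [([5.0], [4.5]), ([0.2, 0.3], [0.25, 0.28])]
print(model_criterion([set1, set2]))                   # smaller value = better model
```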
To obtain the criterion for comparison of the model T_1 with the united sets s_1 and s_2, the criteria must be averaged over these sets: Since each set of physical observables belongs to its own kinematic area, the arithmetic averaging of the criteria gives a final criterion which is uniformly distributed over the union of the kinematic areas of all sets of observables. Repeating the same analysis (5 - 11) for another model T_2 with respect to the same N sets of physical observables makes it possible to compare the criteria of the two models. Application to real experiments and theories We take the published results of the three-fluid dynamic (3FD) model, which uses three versions of the equation of state (EoS) of the nuclear matter created in heavy-ion collisions [12], [13]: T_1 is the 3FD model with a 2-phase EoS, that is, with a first-order phase transition to the deconfined state of nuclear matter; T_2 is the 3FD model with an EoS describing a smooth crossover transition to the deconfined state; T_3 is the 3FD model with a purely hadronic EoS. The 3FD model was applied to the experimental data for central heavy-ion collisions from AGS to RHIC energies, √s_NN = 2.7 - 62.4 GeV [14], [15]. We have applied formulas (6 - 8, 10, 11), concerning the relative criteria, to the following sets of physical observables: Y_particle is the total yield of a given particle, calculated by integrating the particle rapidity distributions dN/dy of [14], [15]; dY_particle is the midrapidity multiplicity dN/dy|_{y=0} of a given particle, taken from Fig. 9 of [15]. For all these physical observables, the number of data points is one: n_i = 1. The relative criteria are expressed as a percentage by multiplying the calculated values by 100, and the results are shown in Fig. 1. At first, for charged particles, a separate averaging of the relative criteria over each isospin group (Q) was done for protons and for kaons. The same procedure was applied to the directed flows v_1(y) of protons, antiprotons and pions from mid-central heavy-ion collisions at energies √s_NN = 2.7 - 27 GeV, which were taken from Figs. 1-3 of [16]. The criteria were calculated by (5 - 11). Both types of criteria show similar behavior (Fig. 2). The relative criteria were no longer multiplied by 100. For each collision energy, the corresponding sets of physical observables were taken. Discussion It can be seen from Fig. 1 that all three versions of the 3FD model are in poor agreement with the experimental data in the central heavy-ion collision energy range of √s_NN = 5 - 9 GeV. This may indicate that in this energy range the equation of state of nuclear matter has parameters other than those adopted in the 3FD model. If a phase transition of nuclear matter from the hadronic phase to the quark-gluon phase occurs at all, it does so at energies below √s_NN = 5 GeV or above √s_NN = 9 GeV. Between these energies, nuclear matter is neither in a purely hadronic nor in a quark-gluon state. In [17] it was shown that the difference in the measured hyperon yields between the NA49 and NA57 experiments is caused by the quantum nature of the fireball created in heavy-ion collisions, that is, the fireball is created in a given quantum state with a probability of around 50%. Taking into account that each of the three versions of the 3FD model does not contradict physical laws, it can be assumed that there are three real scenarios for the evolution of the fireball in nature, that is, we have three quantum states of the fireball: |hadronic⟩, |2-phase⟩ and |crossover⟩, where the last two represent the QGP state via superposition.
As a result, we must represent an arbitrary quantum state of the fireball through a superposition of these three states, as shown in Fig. 3. The amplitudes of these quantum states depend on the energy and centrality of the heavy-ion collisions. Thus, looking at Fig. 1, assumptions can be made about the relative weights of these states at different energies. The suspiciously good agreement of the hadronic version of the 3FD model at energies √s_NN > 20 GeV is explained in [18] by an incorrect choice of the parameters of deconfined nuclear matter. Conclusion The method presented here for comparing theoretical predictions with a large set of experimental data provides a clear way to assess the quality of a theory and to choose the best one among many theoretical models. In heavy-ion experiments, a large number of physical observables are measured for different types of elementary particles. Each of the existing theoretical models describes the experiment well only in a narrow range of the experimental data, competing with other models almost on an equal footing in the rest of the measurement region. In the vast majority of cases, even the comparison of several dozen experimental spectra with theoretical calculations is carried out by eye, which makes it impossible to draw an unambiguous conclusion about the quality of the theories and to choose the best of them. The method proposed in this article is extremely simple: after grouping the data and calculating the appropriate criterion for each spectrum, it is only necessary to average the obtained criteria. Further visualization of the data, for example as the distribution of the average criterion over energy, gives an unambiguous picture of the quality of the model over the entire measurement range. At the same time, when exploring a quantum object, we must keep in mind that nature is richer in the manifestation of physical phenomena than the human imagination. Competing models that assume a different evolution of a physical object might be represented in nature as different quantum states of that object, realized under the same conditions. Thus, taking into account the quantum nature of the fireball, we can assume that even at an energy of √s_NN = 63 GeV in central heavy-ion collisions there are events in which the QGP does not ignite and the fireball consists, during its whole lifetime, only of pure hadronic (not deconfined) matter. The probability of such an event is small, but not zero. In the experiments this is not taken into account, and it is tacitly assumed, without justification, that such events do not happen at this energy. On the other hand, at a central heavy-ion collision energy of around √s_NN = 12 GeV, the number of events without QGP ignition can be comparable to the number of events with QGP ignition. The ignition of the QGP itself can proceed according to two scenarios, through a first-order phase transition or through a smooth crossover. And again, it is tacitly assumed, without any supporting evidence, that only one of these scenarios is realized in nature. The quantum nature of the fireball is thus completely ignored. At the same time, the question of whether an appropriate trigger can be applied in an experiment to select events with different fireball quantum states remains open.
2015-08-13T12:36:13.000Z
2015-08-13T00:00:00.000
{ "year": 2015, "sha1": "51b860247ad946f3ebbcee291271d7a5b5364bdd", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "ddb411697aed84ef89c4a5dc4e8fc2fb22caf27d", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
257707461
pes2o/s2orc
v3-fos-license
Constraint Release Rouse Mechanisms in Bidisperse Linear Polymers: Investigation of the Release Time of a Short-Long Entanglement Despite a wide set of experimental data and a large number of studies, the quantitative description of the relaxation mechanisms involved in the disorientation process of bidisperse blends is still under discussion. In particular, while it has been shown that the relaxation of self-unentangled long chains diluted in a short chain matrix is well approximated by a Constraint Release Rouse (CRR) mechanism, there is no consensus on the value of the average release time of their entanglements, τobs, which fixes the timescale of the CRR relaxation. Therefore, the first objective of the present work is to discuss the different approaches proposed to determine this time and compare them to a large set of experimental viscoelastic data, either newly measured (poly(methyl-)methacrylate and 1,4-polybutadiene blends) or coming from the literature (polystyrene and polyisoprene blends). Based on this large set of data, it is found that with respect to the molar mass of the short chain matrix, τobs follows a power law with an exponent close to 2.5, rather than 3 as previously proposed. While this slight change in the power law exponent does not strongly affect the values of the constraint release times, the results obtained suggest the universality of the CRR process. Finally, we propose a new description of τobs, which is implemented in a tube-based model. The accurate description of the experimental data obtained provides a good starting point to extend this approach to self-entangled binary blends. Introduction The processes involved in the relaxation of the orientation of a monodisperse linear polymer are well identified and understood, based, for example, on the molecular tube picture proposed by Doi, Edwards and de Gennes [1,2]. However, the relaxation process of a long linear polymer moving in a shorter linear matrix is still under discussion [3,4]. Several mechanisms have been proposed to describe the relaxation of the long chains, among which the periodical loosening and reformation of the long chain entanglements involving a short chain (called short-long entanglements), which allows the long chains to further explore their surroundings and relax faster than in the monodisperse case [5][6][7][8]. This relaxation mechanism takes place all along the probe chains and is called Constraint Release (CR). It has been modelled by different approaches, such as the "self-consistent CR" [9], the "dynamic Tube dilation" [10], or the "Double reptation" models [11,12]. For bidisperse polymers containing long chains that are not or barely entangled with other long chains (i.e., long chains are self-unentangled), all the entanglements along the probe chains can be considered as short-long entanglements and the entanglement segments have a similar relaxation time,τ obs , which mostly depends on the relaxation time of the short chains. In cases where the short chain matrix is much shorter than the long probe Polymers 2023, 15, 1569 2 of 28 chains, it has been shown by Graessley [5,13] that it is faster for the long chains to fully relax through these constraint release events, rather than by reptation along their contour length. Consequently, it is assumed that the self-unentangled long chains are fully relaxing via a Rouse process, called the constraint release Rouse (CRR) process [3]. 
The corresponding terminal relaxation time of the long chains is their CRR time τ CRR,L , which depends on the average waiting time τ obs for a local CR-jump [3][4][5][6][7]14] (taking place on a distance equal to an entanglement segment), as well as on the number of entanglements along the probe chain Z L and is defined as: τ CRR,L is usually larger and never shorter than the intrinsic Rouse time of the long chains, τ R,L = τ e Z 2 L , where τ e is the intrinsic Rouse time of an entanglement segment. It must be noted, however, that the CR process of the long chains contains some non-Rouse features, as discussed in Refs. [3,14] by Watanabe and co-workers. In particular, the eigenfunctions of the chain motion are not following the sinusoidal functions expected for the CRR process. Nevertheless, following Refs. [4,[14][15][16][17], the Rouse dynamics can be used, in a first approximation, to describe the relaxation of the long chains diluted in a short chain matrix, as Equation (1) well fits the experimental data, and as the storage and loss moduli of the long component show a Rouse-like relaxation characterized by a power law of around 0.5 when plotted in respect to the angular frequency. While Equation (1) is well accepted to describe the relaxation time of the long chains, it raises three specific questions, which did not lead to a real consensus up to now and therefore require further investigation: (1) What is the exact criterion to determine if a long chain is slow enough compared to the short linear matrix to relax by CRR, (2) How to determine the value of τ obs and its dependence on the relaxation time of the short chain matrix, and (3) Is this relationship depending on the chemistry of the bidisperse blends? In order to address the first question and determine if the molar masses of the short and long components are separated enough to observe the CRR relaxation of the probe chains, Struglinski and Graessley proposed a criterion according to which the renewal of the tube due to the loss/renewal of topological constraints must be faster than the reptation of the long chains in their initial tube [13]: where Z S is the number of entanglement segments of the short chains, and the ratio r SG is the Struglinski-Graessley parameter. In this definition, the release time of a short-long entanglement segment is assumed to be equal to the reptation time of the short chains: τ obs = τ rept,S = 3τ e Z 3 S . While this criterion is still often used today, it has been shown both by diffusion experiments [18] and experimental data [3,14,19] that the critical value of r SG at which the CRR relaxation process takes place is, in reality, much shorter than 1. Indeed, diffusion experiments conducted by Green et al. [18] on long dilute deuterated polystyrene (with a molecular weight M L ) diffusing in polystyrene matrices of various molecular weight M S confirmed the scaling of r SG ∝ M L M 3 S but showed that the critical r SG value for which the matrix has an impact on the probe chain relaxation is not 1 but 1 α CR , with α CR being the number of local constraints per M e unit related to the efficiency of the CR process. Depending on the definition of the average molar mass between two entanglements M e , 1 α CR ≈ 0.1 (if we consider that the plateau modulus G e = ρRT M e ) or 0.064 (if G 0 N = ρRT M e = 4 5 G e ). 
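As a quick numerical illustration of the quantities just introduced, the sketch below evaluates the CRR time of Equation (1), τ_CRR,L = τ_obs Z_L², together with the original Struglinski-Graessley ratio r_SG = Z_L/Z_S³ and the two critical values quoted above (1 and about 0.1). The numbers fed in are arbitrary examples, not data from the paper.

```python
def crr_time(tau_obs, Z_L):
    """Equation (1): terminal CRR time of a long probe chain with Z_L entanglements."""
    return tau_obs * Z_L**2

def r_sg(Z_L, Z_S):
    """Original Struglinski-Graessley ratio r_SG = Z_L / Z_S**3, built with the
    assumption tau_obs = tau_rept,S = 3 tau_e Z_S**3."""
    return Z_L / Z_S**3

# Arbitrary illustrative values (not taken from the paper).
tau_e, Z_L, Z_S = 1.0e-6, 60, 8
tau_obs = 3 * tau_e * Z_S**3          # Graessley's original assumption for the release time
print("tau_CRR,L =", crr_time(tau_obs, Z_L))
for r_c, label in [(1.0, "Struglinski-Graessley"), (0.1, "diffusion-based, 1/alpha_CR")]:
    regime = "CRR-dominated" if r_sg(Z_L, Z_S) > r_c else "reptation in (dilated) tube"
    print(f"r_SG = {r_sg(Z_L, Z_S):.3f} vs r_c = {r_c} ({label}): {regime}")
```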
This new criterion has been further tested by Park and Larson [19] within a reptation model on a representative set of polystyrene (PS), polyisoprene (PI), and 1,4-polybutadiene (PBD) bidisperse blends. They found that the critical value r SG = 0.1 could qualitatively predict whether the long probe chain would reptate in an undilated tube (r SG < 0.1) or in a dilated tube (r SG > 0.1) for samples with r SG within a factor 3 away Polymers 2023, 15, 1569 3 of 28 from 0.1, irrespective of the polymer chemistry, and therefore suggesting a universality of this critical value. At the same time, Watanabe et al. conducted an extensive study on PS [6] and PI [14] bidisperse blends and considered the experimentally observed release time of a short-long entanglement segment instead of the bare reptation time of the polymer matrix in the calculation of r SG . Based on this wide dataset, they concluded that the critical r SG value for PS blends is r SG ≈ 0.5, while its value for PI blends is, rather, r SG ≈ 0.2. This result questions the universality of a critical r SG value and suggests that entanglement dynamics are chemistry-dependent. More recently, Read et al. [4] proposed to account for the influence of contour length fluctuations (CLF) in the relaxation time τ d,S of the short chains, in order to establish a more precise expression of the Struglinski-Graessley parameter and defined r * SG as: In this expression, f (Z) represents the tube fraction relaxed by CLF at the time the chains relax by reptation [20]. Based on slip-spring simulation results, the authors found the critical value of this new parameter to be r * SG = 0.0254, i.e., much lower than the initial value of 0.1 established from diffusion experiments for r SG . Nevertheless, this new expression contradicts the scaling of r SG ∝ M L M 3 S found in the other studies. It seems therefore important to further investigate the validity of this criterion, based on a wide set of either new or existing experimental data. The relationship between τ obs and Z S is intimately related to the critical value of r SG or r * SG . As well illustrated in Ref. [3], based on experimental data, the waiting time for a local hop over a distance of an entanglement segment is not directly proportional to the relaxation time of the short linear matrix. While τ d,S ∝ M 3.5 S when accounting for the impact of contour-length fluctuations on reptation, diffusion experiments from Green et al. [18] and results of viscoelastic relaxation experiments on PS and PI bidisperse blends from Watanabe et al. [3] suggest that τ obs ∝ M α S with α ≈ 3 and much shorter than the disentanglement time of the short chain matrix. It must be noted, however, that the exact value of the exponent α is not known. For example, in Ref. [14], Sawaka et al. showed that the data available for PI blends do not allow for discrimination between α ≈ 3 and α ≈ 2.8, while it was concluded that α ≈ 2.3, as proposed in Ref. [21], could not be used to accurately describe the data. To explain this scaling, several physical pictures have been proposed, the first of which is based on the blob theory. Accounting for the number of effective constraints on each entanglement blob, Klein showed that many matrix chains penetrate an entanglement segment, each of them being capable of activating constraint release events [22]. Therefore, the number of constraint release events is enhanced by a factor 1/ √ M S , leading to τ obs ∝ More recently, Shivokhin et al. [15] and Read et al. 
[4] calculated τ obs from slip-link simulations and found an empirical expression to determine its value as a function of τ d,S : The authors first obtained τ obs = 0.047τ d,S from Equation (4), but corrected this expression by accounting for the influence of chain friction on the constraint release hop distance: in case of a fast constraint release event, the long probe chain cannot move a significant distance, therefore decreasing the hop length. Consequently, in the case of an intermediate matrix chain length, Equation (4) leads to τ obs ∝ M 3 S (rather than τ obs ∝ M 3.5 S for very long matrix chains and τ obs ∝ M 1 S for short matrix), as expected from experimental data. This expression is interesting, as it suggests that the exponent α can vary depending on the range of molar masses considered for the short chain matrix. However, while this predicts satisfactorily the value of τ obs for self-unentangled PI bidisperse samples, it remains difficult to explain Equation (4) in the framework of the tube model. On the other hand, Ebrahimi et al. [16], Lentzakis et al. [23], and Yan et al. [17] proposed a simpler scaling based on the blob picture, experimentally validated for PI, PBD, and poly(hydroxybutyrate) (PHB) star/linear blends, as well as for H/linear and comb/linear blends: The authors justified Equation (5) by noticing that when a short matrix chain is relaxed at the time τ d,S , all of its Z S entanglements are relaxed. Therefore, releasing the constraint imposed by a single entanglement only requires the local motion of the chain at the scale of an entanglement blob. However, as mentioned in Ref. [23], it is not clear if this relationship stays valid in the case of a well entangled short chains matrix (Z S > 13), as it has not been tested in this regime where much larger differences between Equations (4) and (5) appear. Thus, from the above, it is clear that the relationship between τ obs and τ d,S needs to be further investigated, using a larger range of matrix molar masses. Finally, Watanabe and co-workers have shown that, assuming τ obs = KM 3 S , different values of the proportionality factor K had to be used, depending on the polymer chemistry, in order to describe the experimental data. In particular, it was found that the K constant for PI bidisperse blends is twice as small as the K constant for PS blends [3,14]. This result suggests that the universal behavior of the polymer melts is lost, as their normalized viscoelastic properties cannot be expressed as only a function of their number of entanglement segments and material parameters. It seems therefore important to further investigate this result for polymer blends of other chemistries. In order to address these questions, the first objective of the present work is to further test and discuss the different scaling that has been proposed to determine τ obs and its dependency on the polymer chemistry, based on new poly(methyl methacrylate) (PMMA) and 1,4-polybutadiene (PBD) bidisperse blends with self-unentangled (or very poorly entangled) long chains, in order to complement the data of PS [24][25][26][27] and PI [14,[28][29][30] blends available in the literature. Based on the CRR time of the long chains, as well as τ obs determined from these data, we would like to discuss the relationship between τ obs and the number of entanglements of the short chain matrix Z S and the value of the Struglinski-Graessley parameter [3][4][5]13,31]. 
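To make the competing proposals easier to compare before they are confronted with data, the short sketch below tabulates, for a few matrix lengths, the release time predicted by the blob picture of Equation (5) (τ_obs of the order of τ_d,S/Z_S) and by pure power laws τ_obs ∝ Z_S^α with α = 2.5 and 3. The reptation time of the matrix is approximated here by τ_d,S ≈ 3τ_e Z_S³ without CLF corrections, and the prefactor K = 0.075 quoted later in the paper for α = 3 is reused as a placeholder, so the numbers only illustrate how quickly the predictions diverge at large Z_S.

```python
tau_e = 1.0  # all times expressed in units of tau_e

def tau_d_short(Z_S):
    # Bare reptation estimate for the short matrix; CLF corrections are ignored here.
    return 3.0 * tau_e * Z_S**3

def tau_obs_blob(Z_S):
    # Equation (5)-type blob picture: one CR event per entanglement of the short chain.
    return tau_d_short(Z_S) / Z_S

def tau_obs_power(Z_S, alpha, K=0.075):
    # Pure power law tau_obs = K * tau_e * Z_S**alpha; K = 0.075 is the value quoted
    # later in the paper for PS/PMMA/PBD with alpha = 3, reused here as a placeholder.
    return K * tau_e * Z_S**alpha

print(f"{'Z_S':>4} {'blob':>12} {'alpha=2.5':>12} {'alpha=3':>12}")
for Z_S in (3, 6, 13, 25, 50):
    print(f"{Z_S:>4} {tau_obs_blob(Z_S):12.3g} "
          f"{tau_obs_power(Z_S, 2.5):12.3g} {tau_obs_power(Z_S, 3.0):12.3g}")
```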
Our second objective is to propose a simple expression to determine τ obs that can easily be incorporated in a CRR model and to validate it by comparing the theoretical results to the viscoelastic data of dilute binary blends of various chemistries. The manuscript is organized as follows: in Section 2, a model is proposed to account for the CRR process underwent by a long chain well diluted in a short linear matrix. Section 3 presents the PMMA and PBD samples measured in this work, as well as all the other blends found in the literature, with self-unentangled long chains. In Section 4, the values of τ obs are first extracted from linear viscoelastic data of those samples and then discussed in relation to the molar mass and relaxation time of the short chain matrix. The influence of the polymer chemistry, i.e., of PS, PI, PMMA, and PBD samples, is also investigated. Based on these results, a critical value for the Struglinski-Graessley criterion is proposed. Then, in Section 5, we include the CRR process in a tube model and validate it with the experimental data. Finally, conclusions are presented in Section 6. Description of the CRR Model In this section, we focus on modeling the linear viscoelastic behavior of bidisperse linear blends in which the long chains are self-unentangled and the molar masses of the long and short components are well separated, such that the Struglinski-Graessley criterion is fulfilled. Under these conditions, one can assume that the long chains only relax by a CRR process governed by the time of the release/formation of the short-long entanglements, τ obs [6][7][8]. Thus, as illustrated in Figure 1, two Rouse processes are observed in the relaxation of the long chains. First, at very short times, the probe chain relaxes its orientation by successive (intrinsic) Rouse relaxation modes p involving longer and longer molecular segments. However, at time t = τ e , at which the chain is relaxed at the length scale of the entanglement segments of mass M e (see Figure 1a), intrinsic Rouse relaxation cannot take place anymore, due to the entanglement constraints imposed on the chain. Since τ e corresponds to the relaxation time of the mode p = Z L (the number of entanglements along the long probe chain), the first component G R,L (t) of the viscoelastic relaxation modulus of the probe chain corresponding to this intrinsic Rouse relaxation process can be expressed as the sum of the contribution of each Rouse mode [20]: where υ L is the weight fraction of the long chains, ρ is the polymer density, T the temperature, R the universal gas constant, and M L the molar mass of the polymer. Description of the CRR Model In this section, we focus on modeling the linear viscoelastic behavior of bidisperse linear blends in which the long chains are self-unentangled and the molar masses of the long and short components are well separated, such that the Struglinski-Graessley criterion is fulfilled. Under these conditions, one can assume that the long chains only relax by a CRR process governed by the time of the release/formation of the short-long entanglements, . [6][7][8]. Thus, as illustrated in Figure 1, two Rouse processes are observed in the relaxation of the long chains. First, at very short times, the probe chain relaxes its orientation by successive (intrinsic) Rouse relaxation modes p involving longer and longer molecular segments. 
However, at time t = , at which the chain is relaxed at the length scale of the entanglement segments of mass (see Figure 1a), intrinsic Rouse relaxation cannot take place anymore, due to the entanglement constraints imposed on the chain. Since corresponds to the relaxation time of the mode = (the number of entanglements along the long probe chain), the first component , ( ) of the viscoelastic relaxation modulus of the probe chain corresponding to this intrinsic Rouse relaxation process can be expressed as the sum of the contribution of each Rouse mode [20]: where is the weight fraction of the long chains, is the polymer density, the temperature, the universal gas constant, and the molar mass of the polymer. (a) Rouse relaxation modes of a self-unentangled long probe chain ( < 2) diluted in a short chain matrix. The relaxation starts from the mode = and is followed by slower modes, Figure 1. (a) Rouse relaxation modes of a self-unentangled long probe chain (υ L Z L < 2) diluted in a short chain matrix. The relaxation starts from the mode p = N and is followed by slower modes, down to the mode p = Z L corresponding to the entanglement segments (blue blobs). The latter are relaxed at time t = τ e , after which the intrinsic Rouse relaxation is stopped. (b) CRR relaxation process taking place for all the modes 1 ≤ p < Z L (the blue blobs represent one of these modes). This process starts at time t = τ obs and ends at time t = τ obs Z 2 L , which corresponds to the relaxation of the whole chain. As mentioned in the Introduction, the relaxation of molecular segments longer than the entanglement segments can only take place at times longer than the release time of the short-long entanglements, τ obs (with τ obs ≥ τ e ). Thus, no relaxation takes place between τ e and τ obs , and the CRR time of the whole chain, which evolves at the rhythm of the short-long entanglements disentanglement/re-entanglement, is equal to τ CRR,L = τ obs Z 2 L , rather than its intrinsic Rouse time, τ R,L = τ e Z 2 L . It must be noted, however, that long chains diluted in an oligomeric matrix (such that τ d,S < τ e ) fully relax by their intrinsic Rouse process (thus τ obs = τ e and τ CRR,L = τ R,L ). Accounting for this condition, the CRR process of self-unentangled long chains can be approximated as [32]: Then, combining Equations (6) and (7), the relaxation modulus of the long chains is well described as: In the case of self-entangled long chains, the CRR process stops once the long-long entanglements (of mass M e /υ L ) are relaxed, i.e., at time t = τ obs /υ 2 L . This relaxation time corresponds to the CRR mode p = υ L Z L . The relaxation of the longer modes (1 ≤ Z L < υ L Z L ) takes place via other relaxation processes, such as the reptation and contour length fluctuations. In this study, the long probe chains in some bidisperse samples are barely entangled with other long chains (1 < υ L Z L < 3), leading to a very small portion of probe chain actually constrained by long-long entanglements. In the model, we neglect these long-long entanglements and assume that the relaxation of these poorly self-entangled long chains is fully described by a CRR process. As discussed in Section 5, this assumption can lead to a slightly underestimated terminal relaxation time for specific blends. The relaxation modulus of the blend also includes the contribution from the short chain matrix, G S (t). 
To determine the latter, we assume that the disorientation processes of the short matrix chains in the blend are unaffected by the presence of the long chains, i.e., the viscoelastic relaxation modulus of the short chain matrix in the blend is the same as in the monodisperse state. Such an assumption is justified, since the concentration of the long component is very small. G S (t) is calculated as explained in detail in Ref. [33]; if the short chains are unentangled (M S ≤ 2M e ), the matrix fully relaxes by a Rouse process: where υ S represents the weight fraction of the short chains. If the short chains are entangled, their relaxation modulus is determined based on the simplified time marching algorithm (TMA) [34,35]: where ϕ S (t) is the survival fraction of the initial entanglement segments along a short chain, considering that the chains can relax by reptation and contour length fluctuations, and the function Φ TB,e f f (t) accounts for both the tube dilation factor (when Φ TB,e f f (t) ≤ 1) and the intrinsic Rouse relaxation of the entanglement segments (Φ TB,e f f (t) ≥ 1 for t ≤ τ e ) [33]: The viscoelastic modulus of the whole sample finally results from the sum of the contributions of the long and short chains: The complex viscoelastic spectrum G * (ω) is then obtained from G blend (t) using the Schwarzl equations [36]. Validity of the CRR Model While similar approaches have already been used in previous works and showed good agreement with the data, it is important to note that Equation (6) is derived from the assumption that the CRR process taking place at the local level does not depend on the global motion of the chain, which is most likely not the real case. Furthermore, as detailed in Refs. [3,14,[37][38][39], the CR-Rouse feature of the chain motion is only valid if we can consider that the segments between two entanglements always have the same length during the chain relaxation. Indeed, if the number of monomers between two entanglements varies, this directly leads to a tension-equilibration process, i.e., to a transfer of monomers along the contour length, which speeds up the relaxation of the entanglement segments located near to the chain ends. As discussed in Ref. [37], this may be the reason why non-Rouse features are observed in the dielectric data of dilute PI blends. This possible faster relaxation of the chain ends is not accounted for in the present approach. It is, however, expected that this should not significantly affect the predicted curves, as the tension equilibration is a global process that involves the motion of the whole chain and is therefore rather slow in comparison to the local CR-Rouse equilibration of the chain. Another issue, which has been raised in Ref. [14], is the validity of the CRR model at short times, t < τ d,S . As it is shown in Section 4, the average release time of a short-long entanglement segment, τ obs , is much shorter than the average relaxation time of the short chain matrix. Therefore, at the time at which we consider the relaxation of the faster modes of the CRR process to take place, a fraction of the initial short-long entanglements are still existing, preventing the local equilibration of the long chains. However, at long times, this process is averaged, and the proposed assumption is acceptable. Deviations are thus expected at short times, and, in particular, between τ obs and τ d,S . This point is further investigated in the Supplementary Information (see Figure S4). 
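Before turning to the samples, a minimal numerical sketch of the model assembled in this section is given below: it sums the intrinsic Rouse modes of the long chain down to p = Z_L (Equation (6)), adds the CRR modes governed by τ_obs (Equation (7)), and, for an unentangled matrix, a Rouse matrix contribution (Equation (9)), before summing the two components as in Equation (12). The mode spectrum is written as exp(−2p²t/τ), consistent with the form quoted later in Section 4 (G(t) = Σ_p e^(−2p²t) for τ_CRR,L = 1); the exact numerical prefactors of the paper's equations are not reproduced, the total number of Rouse modes and all parameter values are placeholders, and the entangled-matrix (TMA) branch is omitted.

```python
import numpy as np

R = 8.314  # J/(mol K)

def G_rouse(t, tau, p_min, p_max, prefactor):
    """Sum of Rouse modes p_min..p_max with spectrum exp(-2 p^2 t / tau)."""
    p = np.arange(p_min, p_max + 1)[:, None]
    return prefactor * np.exp(-2.0 * p**2 * t[None, :] / tau).sum(axis=0)

def G_blend(t, rho, T, M_L, M_S, M_e, v_L, tau_e, tau_obs, N_modes=200):
    """Equations (6)-(9) and (12), schematically: intrinsic Rouse of the long chain for
    p >= Z_L, CRR modes for p < Z_L with tau_CRR = tau_obs * Z_L^2, plus a Rouse matrix
    (valid only for an unentangled matrix, M_S <= 2 M_e)."""
    Z_L = M_L / M_e
    g_L = rho * R * T / M_L * v_L
    g_S = rho * R * T / M_S * (1.0 - v_L)
    fast = G_rouse(t, tau_e * Z_L**2, int(Z_L), N_modes, g_L)                # Eq. (6)-like
    crr = G_rouse(t, max(tau_obs, tau_e) * Z_L**2, 1, int(Z_L) - 1, g_L)     # Eq. (7)-like
    matrix = G_rouse(t, tau_e * (M_S / M_e)**2, 1, N_modes, g_S)             # Eq. (9)-like
    return fast + crr + matrix                                               # Eq. (12)

# Placeholder parameters (molar masses in kg/mol); these are not the Table 7 values.
t = np.logspace(-7, 3, 200)
G = G_blend(t, rho=1000.0, T=300.0, M_L=250.0, M_S=4.0, M_e=2.0,
            v_L=0.02, tau_e=1e-6, tau_obs=5e-4)
print(G[:3])
```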
Bidisperse Blends Composed of Self-Unentangled Long Chains PMMA bidisperse blends: Poly(methyl methacrylate) of various molar mass, with a high syndiotactic ratio (>79%) and low polydispersity index (PDI), were commercially obtained from Polymer Source, Inc. (Montreal, QC, Canada). The weight average molar mass M of the materials has been measured with size-exclusion chromatography (SEC column by Agilent, Santa Clara, CA, USA) and the T g of the monodisperse samples has been determined by differential scanning calorimetry (DSC) with a standard Heat-Cool-Heat procedure (heating rate of 10 K/min under inert atmosphere), in a Q2000 instrument (TA instruments, New Castle, DE, USA). The main characteristics of the blends, as the molecular weight of the long chain M L and its weight fraction υ L , as well as of the short chain matrices (their molecular weight M S , polydispersity index (PDI), and T g ), are given in Table 1. The bidisperse PMMA blends composed of 2 or 3 wt% of PMMA234 were prepared either by precipitation for high molecular weight matrices (PMMA27, PMMA35, and PMMA60) or by dilution for shorter matrices. To this end, the components of the blend were first weighed to obtain the desired weight fraction and dissolved together in tetrahydrofuran (THF, purchased from Merck KGaA, Darmstadt, Germany) to obtain a concentrated polymer solution of 30 mg/mL. The mixture was then stirred slowly overnight at room temperature. The solutions containing the high molecular weight matrices were precipitated drop by drop in a large amount of methanol under continuous stirring. After filtration, the obtained powder was dried in a vacuum oven set at 70 • C for 5 days to remove residual solvent. The other solutions were poured into a form made of thick aluminum foil and covered with a thin perforated aluminum foil. They were then left to dry in a fume hood until a solid film had formed (>24 h). Flakes of this thin film were then placed to dry in the vacuum oven set at 50 • C for 7 days to remove remaining solvent. The weight fraction of each blend was verified by SEC and are listed together with the blend molecular characteristics in Table 1. Dilute bidisperse PBD blends were prepared from these samples, and their molecular characteristics are listed in Table 2. PBD254 was first diluted in THF to a concentration of 2 mg/mL, and an appropriate amount of this solution was dropped in solutions of 200 mg/mL of PBD matrices in THF to obtain the desired blend weight fraction. An antioxidant (butylated hydroxytoluene, purchased from Merck KGaA, Darmstadt, Germany) was added in a small amount to each solution (~0.5 wt%) prior to manual stirring. The PBD blends were then left to dry under increasing vacuum conditions, at room temperature and in the dark, for 9 to 12 days. Bidisperse samples from the literature: A wide set of linear viscoelastic data of linear bidisperse blends of PS and PI (with similar cis-1,4:trans-1,4:3,4 ratio) with not or poorly self-entangled long chains (υ L Z L < 3) has been studied in the literature. These samples are listed in Tables 3-6, depending on the sample chemistry. Main characteristics and corresponding references are given. For each sample, when available, the value of the glass transition temperature (T g ) that has been used in the original reference is given. When needed, the theoretical value of T g , determined based on the Fox-Flory Equation (see Section 4.1), is also shown. 
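Returning briefly to the blend preparation described above, the small worked example below computes the component masses and solvent volumes implied by a target long-chain weight fraction at the solution concentrations quoted in the text (30 mg/mL for the PMMA solutions, 2 mg/mL and 200 mg/mL for the PBD stock and matrix solutions); the target blend masses are arbitrary illustrative choices, not those actually used.

```python
def pmma_blend(total_mass_mg, v_L, conc_mg_per_ml=30.0):
    """Masses of long and short PMMA and the THF volume for a 30 mg/mL solution."""
    m_long = total_mass_mg * v_L
    m_short = total_mass_mg - m_long
    return m_long, m_short, total_mass_mg / conc_mg_per_ml

def pbd_blend(matrix_mass_mg, v_L, stock_conc=2.0, matrix_conc=200.0):
    """Volume of 2 mg/mL long-chain stock to drop into a 200 mg/mL matrix solution
    so that the dried blend contains a weight fraction v_L of long chains."""
    m_long = matrix_mass_mg * v_L / (1.0 - v_L)
    return m_long / stock_conc, matrix_mass_mg / matrix_conc

m_L, m_S, v_thf = pmma_blend(total_mass_mg=500.0, v_L=0.02)
print(f"PMMA: {m_L:.1f} mg long + {m_S:.1f} mg short in {v_thf:.1f} mL THF")
v_stock, v_matrix = pbd_blend(matrix_mass_mg=1000.0, v_L=0.01)
print(f"PBD: {v_stock:.2f} mL stock added to {v_matrix:.1f} mL matrix solution")
```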
Regarding PBD samples from the literature, full viscoelastic relaxation curves were found for only three sets of bidisperse linear blends containing self-unentangled long chains, PBD550-20 [40], PBD208-15, and PBD412-15 [41] (see Table 5). For these last two samples, only the contribution of the long chain to the viscoelastic modulus is reported. On the other hand, an extensive study from Wang et al. [41] on linear PBD bidisperse blends provides the zero-shear viscosity data of many other sets of samples. These samples are also used in Section 4, and their characteristics are listed in Table 6. Linear Viscoelastic Measurements Prior to the linear viscoelastic measurement, PMMA samples were dried under vacuum overnight and molten, pressed and annealed under vacuum at T = T g + 70 • C in disks of 5.5 mm diameter and 1.4 mm height, yielding to 0.6 mm thick 8 mm disks, while the PBD samples were loaded at room temperature on the 8 mm plate to obtain a thickness between 0.5 and 1 mm between both plates. Upon a progressive decrease of temperature, the gap between both plates was manually decreased to ensure full contact between both plates and the sample. The small amplitude oscillatory shear behavior of these samples was measured on an MCR 301 rheometer (Anton Paar GmbH, Graz, Austria) for PMMA and on an AR-2000 rheometer (TA Instruments, New Castle, DE, USA) under N 2 atmosphere for PBD. A stainless steel 8 mm parallel plate geometry was used, with a convection oven for temperature control. For each sample, an amplitude sweep was performed before each measurement to determine the linear region and choose the appropriate imposed deformation (between 0.1 and 10%). Frequency sweeps were then performed between 200 and 0.01 rad/s at temperatures ranging from 200 to 120 • C for PMMA samples and between 100 and 0.01 rad/s at temperatures ranging from 40 to −80 • C for PBD samples. To avoid crystallization, PBD samples were regularly brought back to 30 • C for a few minutes before conducting measurements at the lowest temperatures. Linear Viscoelastic Data First, the master-curves built for the PMMA samples are shown in Figure 2. Since the samples containing the shortest matrices (PMMA3 and PMMA15) have a lower T g , the reference temperature of the PMMA blends has been adjusted to ensure iso-T g conditions, T re f = T g + 60 • C (see Table 1). Utilizing the appropriate reference temperatures, the shift factors of the different samples follow the same WLF Equation (13) Regarding the PBD samples, the frequency-dependent storage and loss moduli obtained at different temperatures were horizontally shifted to a reference temperature T re f = −50 • C. Results are shown in Figure 3 for both the monodisperse samples and the corresponding blends. The shift factors used to build the master-curves are shown in the insert of the figure. It is observed that all master-curves superimpose well at high frequencies, confirming that the samples all have a similar glass transition temperature. Moreover, the shift factors used for building the different master-curves follow well the William-Landel-Ferry (WLF) equation [42,43]: with c 1 = 6.66 and c 2 = 93.9 • C at T re f = −50 • C. For both sets of samples, it is observed that the influence of the few percent of the long component on the viscoelastic response of the short chain matrix is negligible, as it has been assumed in the model. 
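The time-temperature shifting just described can be reproduced with a few lines of code; the sketch below uses the WLF constants quoted for PBD (c1 = 6.66, c2 = 93.9 °C at Tref = −50 °C) to compute horizontal shift factors a_T and to shift frequency-sweep data onto a master curve. The sweep data used here are synthetic placeholders.

```python
import numpy as np

def wlf_log_aT(T, T_ref, c1, c2):
    """WLF equation: log10(a_T) = -c1 (T - T_ref) / (c2 + T - T_ref)."""
    return -c1 * (T - T_ref) / (c2 + (T - T_ref))

def shift_to_master(freq_sweeps, T_ref, c1, c2):
    """Horizontally shift (omega, G', G'') sweeps measured at different temperatures
    onto a single master curve at T_ref."""
    master = []
    for T, omega, Gp, Gpp in freq_sweeps:
        aT = 10.0 ** wlf_log_aT(T, T_ref, c1, c2)
        master.append((omega * aT, Gp, Gpp))
    return master

# PBD constants quoted in the text; the sweeps themselves are synthetic placeholders.
c1, c2, T_ref = 6.66, 93.9, -50.0
omega = np.logspace(-2, 2, 5)
sweeps = [(-30.0, omega, omega**0.5, omega**0.7), (0.0, omega, omega**0.5, omega**0.7)]
for w_shifted, Gp, Gpp in shift_to_master(sweeps, T_ref, c1, c2):
    print(np.round(w_shifted[:3], 4))
```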
It is also observed that, after the relaxation of the short chain matrix, the storage modulus of most of the blends decreases with a slope of ½, which well corresponds to a CRR regime, as further discussed in Section 4.3. In order to analyze and compare the linear viscoelastic data of the blends coming from the literature (see Tables 3-6), the viscoelastic curves for each different set of samples were shifted at the same distance from their glass transition temperature ( ), to ensure that they are all characterized by the same Rouse time of an entanglement segment, [44]. Since, for the PS samples (see Table 3), the value of was not given, we used the Fox-Flory equation to determine their value for the short chain matrices [45]: where , = 106.6 °C is the glass transition temperature of an ultra-high molecular weight PS polymer, and = 1.1 10 5 g/mol for PS [45]. Then, the glass transition temperature of the blends was determined as: For both sets of samples, it is observed that the influence of the few percent of the long component on the viscoelastic response of the short chain matrix is negligible, as it has been assumed in the model. It is also observed that, after the relaxation of the short chain matrix, the storage modulus of most of the blends decreases with a slope of 1 2 , which well corresponds to a CRR regime, as further discussed in Section 4.3. In order to analyze and compare the linear viscoelastic data of the blends coming from the literature (see Tables 3-6), the viscoelastic curves for each different set of samples were shifted at the same distance from their glass transition temperature (T g ), to ensure that they are all characterized by the same Rouse time of an entanglement segment, τ e [44]. Since, for the PS samples (see Table 3), the value of T g was not given, we used the Fox-Flory equation to determine their value for the short chain matrices [45]: where T g,∞ = 106.6 • C is the glass transition temperature of an ultra-high molecular weight PS polymer, and M re f = 1.1 × 10 5 g/mol for PS [45]. Then, the glass transition temperature of the blends was determined as: where the longest blend component can be assumed to be long enough to have T g,L = T g,∞ . Knowing the T g value of the blends, the horizontal shift to apply to the data in order to compare them at iso-T g was determined based on the WLF equation. For the PS samples, the constants c 1 and c 2 were fixed to c 1 = 6.74, c 2 = 133.6 • C for PS at (T re f − T g = 60.4 • C, following Refs. [24,25]. A similar approach was applied to the PBD samples of Ref. [41], which were all measured at 40 • C (see Table 6). The appropriate T re f was evaluated from the values of T g reported for the monodisperse samples and from Equation (15), and we used c 1 = 3 and c 2 = 180 • C at (T re f − T g = 142 • C from Ref. [46] to shift the values of the zero-shear viscosity at iso-T g conditions, knowing that (13)-(15) are empirical, the validity of the shifting has been checked, based on the rheological curves. For the PS samples, it was found that the complex moduli well superimpose in the high frequency Rouse regime, as illustrated in the Supplementary Information (see Figures S1 and S2) for the monodisperse matrices and some of the blends. It must be noted that for samples PS2810-23.4 and PS2810-39 (see Table 3), a better agreement at high frequency was found by using the theoretical T g values instead of the experimental data. For the PBD blends of Ref. 
[41], the shifting could not be validated because the storage and loss moduli are not reported. This uncertainty must be taken into account in the analysis of this data. Material Parameters In order to analyze and model the viscoelastic data, first one needs to determine the material parameters, i.e., the molar mass of an entanglement segment M e , its intrinsic Rouse time τ e , and the plateau modulus G 0 N . These parameters are listed in Table 7. They have been chosen in order to best fit with TMA experimental data of the monodisperse samples investigated in this work. It should be noted that the parameters employed to model the PI samples at 40 • C are the same as those used in Ref. To confirm the values of the entanglement molecular weight M e chosen in Table 7 for the datasets considered in this article, we follow the method described in Ref. [47] and compare the viscoelastic data normalized by G 0 N and plotted against ωτ e of at least one monodisperse sample from each set of blends [14,[24][25][26][27][28][29][30]40,41] to another data set with a different polymer chemistry but that supposedly has the same number of entanglements Z. Assuming that the normalized linear viscoelastic properties only depend on the number of entanglements, G and G should superimpose onto a single curve for samples with the same Z. This is indeed verified, as demonstrated in Figure 4, for Z = 3, 4, 5, 6, 9, 11, 12, and 26. Assuming that = 14.00 / l , these equations lead to = 3.54 /mol , ≈ 7.00 /mol and ≈ 1.69 /mol, which is close to the values proposed in Table 6. Determination of the CRR Time of the Long Chains, τCRR,L In order to determine the value of , we determine , from the experimental storage and loss moduli of the blends, ( ) and ( ), following Ref. [3]. This requires first removing the short chain matrix contribution: Table 6. Determination of the CRR Time of the Long Chains, τ CRR,L In order to determine the value of τ obs , we determine τ CRR,L from the experimental storage and loss moduli of the blends, G blend (ω) and G blend (ω), following Ref. [3]. This requires first removing the short chain matrix contribution: G L (ω) = G blend (ω) − υ S G S,mono (ω) (18) where G S,mono (ω) and G S,mono (ω) are the experimental storage and loss moduli of the matrix in the monodisperse state. Then, assuming that the long chains relax by a CRR process, the CRR time of the long component is determined from the following lowfrequency limit: In order to validate the values found for τ CRR,L with this equation, we compare in Figure 5 the contribution of the long chains to the storage modulus of the blends, G L (see Equation (17)), vertically shifted by a factor υ L ρRT M L and horizontally shifted by a factor π 2 30 τ CRR,L . The data corresponding to the blends PI308-94, PI626-179, and PS316(10 wt%)-89 have been removed, as their long component do not relax by a CRR process (see Section 5). Despite some data scattering observed at high frequency (resulting from the removal of the matrix contribution obtained from experimental data), the low frequency data well superimposes for all blends, and a Rouse-like relaxation is observed, immediately followed by the terminal regime of relaxation. Moreover, their terminal regime well follows the theoretical curves corresponding to τ CRR,L = 1 (and assuming that G(t) = ∑ p e −2p 2 t ; see the black curves). This confirms that the long linear chains are relaxing via a CRR process. 
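Since the display equations (18)-(19) are only partly legible in this copy, the sketch below reconstructs the procedure described in the text: the weighted monodisperse matrix moduli are subtracted from the blend moduli (Equation (18)), and a terminal relaxation time is then estimated from the low-frequency limit of G'_L/(ω G''_L). Whether the paper's Equation (19) includes an additional Rouse-spectrum prefactor (such as the π²/30 factor used for the horizontal shift in Figure 5) is not reproduced here, so the prefactor should be treated as an assumption.

```python
import numpy as np

def long_chain_moduli(Gp_blend, Gpp_blend, Gp_mono, Gpp_mono, v_S):
    """Equation (18): subtract the weighted monodisperse matrix moduli from the blend."""
    return Gp_blend - v_S * Gp_mono, Gpp_blend - v_S * Gpp_mono

def terminal_time(omega, Gp_L, Gpp_L, n_low=5):
    """Estimate the terminal time from the low-frequency limit of G'/(omega G'').
    Any Rouse-spectrum prefactor relating this to tau_CRR,L is omitted (assumption)."""
    idx = np.argsort(omega)[:n_low]          # lowest frequencies
    return float(np.mean(Gp_L[idx] / (omega[idx] * Gpp_L[idx])))

# Synthetic single-mode Maxwell data with tau = 0.1 s, used as a self-test of the estimator.
omega = np.logspace(-3, 2, 60)
tau = 0.1
Gp = (omega * tau) ** 2 / (1 + (omega * tau) ** 2)
Gpp = (omega * tau) / (1 + (omega * tau) ** 2)
print(terminal_time(omega, Gp, Gpp))  # close to 0.1
```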
The viscoelastic data also confirm the CRR relaxation of the long chains for PBD208-15 and PBD412-15, diluted at 2 or 0.5-1 wt%, respectively (see Table 5), and measured by Wang et al. [41] at 40 °C, as can be seen in Figure 5a. Indeed, these data superimpose well with the series of PBD254 diluted in various matrices and measured at −50 °C. This suggests that the blends composed of shorter matrices should also relax by a CRR-like motion and can be considered in our analysis of the CRR time. In the case of the PBD blends presented in Table 6, as only the zero-shear viscosity data are available [41], the value of τ CRR,L is determined based on the following approximation (after removal of the matrix contribution from the experimental data, and under iso-T g conditions): This method is, however, more approximate than the former one, as it involves the sample density, ρ, which is not accurately known [47], and relies on the assumption that the long chains fully relax by a CRR motion. This assumption cannot be validated, however, because the storage and loss moduli data are not available. Nevertheless, it can be noted that if the long chains relax only by a CRR process, their corresponding relaxation time τ CRR,L should not depend on the weight fraction of the long chains, υ L . This was, indeed, observed (see Figure 6), within a ±10% difference for τ CRR,L , which supports this method. In the case of PBD208-15 and PBD412-15, we determined τ CRR,L both from the long chain contribution to the viscoelastic relaxation modulus and from the zero-shear viscosity. Comparison between the values obtained with the two methods led to similar results (within 20% uncertainty).
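The paper's Equation (20) did not survive extraction, so the sketch below only illustrates one consistent way to estimate τ_CRR,L from the long-chain contribution to the zero-shear viscosity: for a discrete Rouse spectrum G_L(t) = (υ_L ρRT/M_L) Σ_p exp(−2p²t/τ_CRR,L), integration gives η_0,L ≈ (π²/12)(υ_L ρRT/M_L) τ_CRR,L, and the function simply inverts that relation. The π²/12 prefactor is therefore an assumption tied to this particular spectrum, not necessarily the expression used in the paper.

```python
import numpy as np

R = 8.314  # J/(mol K)

def tau_crr_from_viscosity(eta0_long, v_L, rho, T, M_L):
    """Invert eta0_L = (pi^2/12) * (v_L * rho * R * T / M_L) * tau_CRR,L.
    The pi^2/12 prefactor follows from the Rouse spectrum exp(-2 p^2 t / tau_CRR,L)
    summed over all modes; it is an assumption, not the paper's Equation (20)."""
    g = v_L * rho * R * T / M_L
    return 12.0 * eta0_long / (np.pi**2 * g)

# Round-trip self-test with a synthetic tau_CRR,L (placeholder values; M_L in kg/mol).
v_L, rho, T, M_L = 0.02, 900.0, 313.0, 254.0
tau_true = 2.0
eta0 = np.pi**2 / 12.0 * (v_L * rho * R * T / M_L) * tau_true
print(tau_crr_from_viscosity(eta0, v_L, rho, T, M_L))  # recovers ~2.0
```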
In Figure 6, it is also observed that the data could slightly differ from the scaling proposed, ∝ [3,14,22], showing that a lower dependence on could better describe the whole range of data, as already noted in Ref. [3]. It must be noted that the curves shown in Figure 6 depend on the choice of the material parameters, which may affect their comparison. In order to avoid this source of uncertainty, we compare the behavior of the series of PS2810-Matrix, PI626-Matrix, PMMA234-Matrix, and PBD254-Matrix blends solely based on experimental data. The blends available in each of these series have the specificity to be composed of the same proportion of long chains diluted in matrices of various length. These long chains are all relaxing by CRR (see Figure 5). Therefore, as shown in Figure 7 . Therefore, if we consider that both samples are relaxing by a CRR process, the factor is equal to the ratio between , Figure 6. Normalized τ obs data versus Z S for all available datasets for PS (green +), PI (red ∆), PMMA (orange ), and PBD (blue ), compared to Equations (4) (black continuous line) and (5) (black dashed line) and the scaling τ obs /τ e = K W Z 3 S (grey dotted lines). Relationship between τ obs and Z S The release time of a short-long entanglement segment, τ obs , is determined from τ CRR,L and Z L (see Equation (1)), with Z L = M L /M e (see Table 7). Their values, normalized by τ e , are shown in Figure 6, in respect to Z S (independently of the concentration, as all samples can be considered diluted), to assess the validity of the existing relationships presented in Section 1. We first observe that within the experimental scatter, it seems that all the data follow the same curve, including the τ obs parameter calculated from the zero-shear viscosity data. In particular, while the value of τ obs does not depend on the molar mass of the long chains, which is in agreement with the well-established M 2 dependence of the CRR time, no significant difference appears between the different polymer chemistries. It should be noted, however, that we cannot exclude a slightly different behavior of the PI blends. Indeed, if we assume that τ obs /τ e = K W Z α S with α = 3, as proposed in Ref. [3], we find that K W PS = K W PMMA = K W PBD =0.075, while K W PI = 0.050, i.e., a factor 1.5 lower than for the other chemistries, in agreement with Ref. [3], in which a factor 2 was found between the CRR times of these PS and PI samples (the difference between the 1.5 and 2 factors is attributed to the influence of the materials parameters considered here to determine τ obs ). In Figure 6, it is also observed that the data could slightly differ from the scaling proposed, τ obs ∝ Z 3 S [3, 14,22], showing that a lower dependence on Z S could better describe the whole range of data, as already noted in Ref. [3]. It must be noted that the curves shown in Figure 6 depend on the choice of the material parameters, which may affect their comparison. In order to avoid this source of uncertainty, we compare the behavior of the series of PS2810-Matrix, PI626-Matrix, PMMA234-Matrix, and PBD254-Matrix blends solely based on experimental data. The blends available in each of these series have the specificity to be composed of the same proportion of long chains diluted in matrices of various length. These long chains are all relaxing by CRR (see Figure 5). 
Therefore, as shown in Figure 7 for each series, it is possible to horizontally shift the storage modulus of the blends, G blend (ω), by a factor λ, in order to overlap the terminal regime of a specific blend chosen as reference and containing a short chain matrix of mass M S,re f . The terminal relaxation time τ d,blend of the blend can thus be expressed as a function of the terminal relaxation time of the reference blend, τ d,blend re f and the shift factor, as τ d,blend = λτ d,blend re f . Therefore, if we consider that both samples are relaxing by a CRR process, the factor λ is equal to the ratio between τ CRR,L and τ CRR,L re f , the CRR time of the long chain in the reference blend. Or, equivalently, λ is equal to the ratio τ obs τ obs,re f . Figure 7e shows the values of λ used to shift G blend (ω) for each chemistry as a function of the ratio between M S and M S,re f . One can observe that for all the series, the data follow the same trend, with λ scaling with M S M S,re f α . Fitting with a linear regression, the value of α for each chemistry leads to α PS ≈ 2.32, α PI ≈ 2.55, α PBD ≈ 2.51 and α PMMA ≈ 2.3. Therefore, within experimental uncertainty, the value of lambda seems to be well described for all chemistries with α = 2.5 (continuous black line) rather than with α = 3 (dotted grey line). From this result, which is based only on experimental data, it is thus concluded that τ obs ∝ Z 2.5 S . If the CRR time is considered to be proportional to Z 2.5 S Z 2 L , one should find that a universal behavior of the long chain relaxation is recovered, whatever the sample chemistry might be. Since several blends contain a short chain matrix with the same number of entanglements (see Figure 4), we can further check this behavior by looking at their storage modulus normalized by υ L ρRT M L (see Equation (7)) as a function of ωτ e Z 2 L . This way, the terminal relaxation time of the normalized curves is equal to K W Z α S and, thus, only depends on K W , since Z S is similar for all blends. The good overlap of the curves in the terminal regime (despite the small differences in the values of Z S ) shown in Figure 8 seems to confirm this universal behavior. Moreover, among these blends with the same Z S , the blends PS2810-72.4 and PI626-17.6 also share the same Z L . In such a case, the data do not have to be normalized by the number of entanglement segments of the long chain to be compared, and, according to the universal behavior of the samples, the terminal regime of the normalized storage moduli G υ L G 0 N as a function of ωτ e should superimpose. As shown in the Supplementary Information (see Figure S3), this is indeed observed. We therefore conclude that the constraint release Rouse time of long linear chains diluted in a short chain matrix seems to be fully described by the material parameters used in tube models, i.e., G 0 N , M e and τ e . to be compared, and, according to the universal behavior of the samples, the terminal regime of the normalized storage moduli as a function of should superimpose. As shown in the Supplementary Information (see Figure S3), this is indeed observed. We therefore conclude that the constraint release Rouse time of long linear chains diluted in a short chain matrix seems to be fully described by the material parameters used in tube models, i.e., , and . In Figure 6, the predictions of obtained with Equations 4 and 5 are also presented [4,16]. 
While Equation (5) does not predict the correct evolution of for matrices with a larger number of entanglements, the curve predicted by Equation (4) is close to the values of obtained for the PI blends. However, Equation (4) underestimates the value of for the other polymer chemistries. It seems therefore important to further investigate the relationship between and the relaxation time of the matrix, , . In Figure 6, the predictions of τ obs obtained with Equations (4) and (5) are also presented [4,16]. While Equation (5) does not predict the correct evolution of τ obs for matrices with a larger number of entanglements, the curve predicted by Equation (4) is close to the values of τ obs obtained for the PI blends. However, Equation (4) underestimates the value of τ obs for the other polymer chemistries. It seems therefore important to further investigate the relationship between τ obs and the relaxation time of the matrix, τ d,S . Relationship between τ obs and Z S In order to determine τ obs as a function of τ d,S , the relaxation time of the short chain matrix should first be accurately determined. For a monodisperse linear polymer with Z entanglements, Likhtman and McLeish established from simulation data that the final relaxation time of the probe chain can be obtained from its reptation time by including the effect f (Z) of contour length fluctuations, such that [20]: with: On the other hand, the relaxation time of the matrices can be experimentally deter- ωG S (ω) [3]. As shown in Figure 9, a very good agreement is found between these theoretical and experimental times for all the PS, PI, PMMA and PBD monodisperse samples. Furthermore, when the relaxation times are normalized by τ e and the molar mass by M e , all the data collapse into a master-curve, which further validates the values taken for these two material parameters. On the other hand, the relaxation time of the matrices can be experimentally determined by < > = lim → ( ) ( ) [3]. As shown in Figure 9, a very good agreement is found between these theoretical and experimental times for all the PS, PI, PMMA and PBD monodisperse samples. Furthermore, when the relaxation times are normalized by and the molar mass by , all the data collapse into a master-curve, which further validates the values taken for these two material parameters. As expected, using these experimental data to plot in function of , (see Figure 10a), it is observed that the data do not collapse onto a master-curve. However, if we consider that the release time of a short-long entanglement segment scales as: all the data superimpose into the same line of slope 1 (see Figure 10b). [14,[24][25][26][27][28]30,40,41] and measured in this article, obtained [3], as a function of their number of entanglements, compared to the predictions of Equation (21) (black curve) and to the approximation τ d /τ e ∼ 0.14Z 3.5 (dashed red curve). As expected, using these experimental data to plot τ obs in function of τ d,S (see Figure 10a), it is observed that the data do not collapse onto a master-curve. However, if we consider that the release time of a short-long entanglement segment scales as: all the data superimpose into the same line of slope 1 (see Figure 10b). This result further confirms that ∝ . . Indeed, within the range of molar masses investigated, the relaxation time of the entangled short matrices is well approximated by (see Figure 9): Combining Equations 23 and 24, we therefore obtain: ~ 0.14 . This result further confirms that τ obs ∝ Z 2.5 S . 
Indeed, within the range of molar masses investigated, the relaxation time of the entangled short matrices is well approximated by τ_d,S/τ_e ≈ 0.14 Z_S^3.5 (Equation (24), see Figure 9). Combining Equations (23) and (24), we therefore obtain τ_obs ≈ 0.14 K τ_e Z_S^2.5, where K is a proportionality constant, which seems independent of the polymer chemistry according to the good superposition of all sets of data in Figure 10b.

This scaling can be explained as follows: a short chain cannot diffuse freely, by a Rouse process, since it is entangled. However, if we assume that the short chain could move freely in all directions, one can determine an equivalent diffusion coefficient such that the time taken by the chain to diffuse along the tube axis and fully relax is equivalent to the time it would take if the chain diffused freely over a distance equal to its end-to-end distance. Considering that the constraint release time of an entanglement segment corresponds to the time the chain takes to freely explore the blob of an entanglement segment, and that the end-to-end distance corresponds to Z_S such blobs, we find τ_obs ≈ τ_d,S/Z_S. Finally, in order to ensure that this release time is never shorter than the intrinsic Rouse time of an entanglement segment, τ_e, the release time of a short-long entanglement segment is defined as τ_obs = max(K τ_d,S/Z_S, τ_e), Z_S ≥ 2 (Equation (26)). Based on the experimental data, the constant K, which is related to the efficiency of the constraint release process, was fixed to 1.4, which agrees well with the results obtained based on the slip-spring model [4]. In Equation (26), the condition Z_S ≥ 2 accounts for the limiting case in which the polymer matrix is not entangled. Figure 11 shows the comparison between this equation (continuous black line) and the experimental data.

Critical Value of the Struglinski-Graessley Criterion for Dilute Binary Blends

In this Section, the critical value of the Struglinski-Graessley criterion, which determines the limit between relaxation via full CRR-like motion and relaxation by reptation, is discussed, based on new experimental data as well as on data available in the literature. To this end, we first recast all proposed critical values of r_SG in the frame of the new criterion r*_SG proposed by Read et al. (see Equation (3)), in order to account for the influence of contour length fluctuations on the short matrix reptation [4]. Consequently, the criterion proposed by Park and Larson [19], r_SG = Z_L/Z_S^3 > 0.1, can be re-written in this frame, while the criterion proposed by Read et al. [4] leads to a different critical value. These two criteria are compared to the experimental data in Figure 12. Different symbols are used to differentiate the blends for which the long chains were found to fully relax by CRR (+ symbols) (blends for which the normalized G_L(ω) follows a Rouse-like relaxation over a wide frequency range, or which are reported as such in the literature [5,14]) from those which do not (o symbols). It is seen that the criterion of Park and Larson results in a good description of the data, in spite of disregarding contour length fluctuations. Similar results are expected based on the criterion proposed by Watanabe and co-workers [3,6,14], as it is based on the same scaling, the only difference being the presence of a pre-factor in the equation to account for the different chemistries.
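A small helper that evaluates the Park-Larson threshold r_SG = Z_L/Z_S^3 > 0.1 quoted above for a list of blends; the blend entries are hypothetical placeholders, not the samples of Tables 1-6, and the helper only reproduces the criterion as stated, not the CLF-corrected r*_SG.

def r_sg(z_long: float, z_short: float) -> float:
    # Struglinski-Graessley parameter r_SG = Z_L / Z_S**3
    return z_long / z_short**3

# Illustrative blends (Z_L, Z_S); not the actual samples of this work.
blends = {
    "blend A": (200, 5),
    "blend B": (150, 12),
    "blend C": (80, 20),
}

for name, (z_long, z_short) in blends.items():
    r = r_sg(z_long, z_short)
    regime = "full CRR-like relaxation expected" if r > 0.1 else "reptation-dominated"
    print(f"{name}: r_SG = {r:.3f} -> {regime} (Park-Larson threshold 0.1)")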
From Figure 12, one cannot conclude, however, that the CRR limit depends on the polymer nature. On the other hand, the limiting value proposed by Read et al. [4] underestimates the limit between reptation and CRR-like motion. However, it is interesting to note that if τ_obs is used instead of τ_d,S to describe the CRR time in the criterion, i.e., τ_d,L/(τ_obs Z_L^2) > 1 (Equation (29)), the combination of this condition with the definition of τ_obs proposed by the authors (see Equation (4)) leads to a new critical value (Equation (30), with τ_d,S approximated by Equation (24)), which is in good agreement with experimental data (see the black dashed line in Figure 12). Similarly, in the present work, we propose a new critical value for r*_SG based on the condition (29) and the waiting time for a local CR-jump previously defined, τ_obs ≈ 1.4 τ_d,S/Z_S (see Equation (26)). This expression, which is based on τ_obs ∝ Z_S^2.5, leads to equally good results as Equation (30), based on τ_obs ∝ Z_S^3. Thus, these results suggest that if CLF are taken into account in the Struglinski-Graessley criterion, its critical value is well defined by the ratio τ_obs/τ_d,S.

Modeling the LVE of Self-Unentangled Long Chains Diluted in a Short Chain Matrix

In this Section, the linear viscoelastic properties (LVE) of the different bidisperse blends presented in Section 3 are modeled, based on Equations (6)-(12), (21) and (26), to determine τ_obs. The material parameters are given in Table 7 for the PS, PI, PMMA, and PBD polymers. Comparisons between predicted and experimental data are presented in Figures 13-16 for the four types of chemistries. A very good agreement is obtained for most of the samples, over the whole range of frequencies. This further validates the expression proposed to determine τ_obs and suggests that the CRR process can correctly be described based only on the material parameters. This last result should be further validated in the future, also based on other polymer architectures.

Figure 13. Comparison between predicted (continuous lines) and experimental (symbols) storage and loss moduli of the long PS chains in monodisperse state or diluted in various matrices (see Table 3).

Figure 14. Comparison between predicted (continuous lines) and experimental (symbols) storage and loss moduli of the long PI chains in monodisperse state or diluted in various matrices (see Table 4).
Figure 15. Comparison between predicted (continuous lines) and experimental (symbols) storage and loss moduli of the long PMMA chains in monodisperse state or diluted in various matrices (see Table 1).

Figure 16. Comparison between predicted (continuous lines) and experimental (symbols) storage and loss moduli of the long PBD chains in monodisperse state or diluted in various matrices (see Tables 2, 5 and 6).

Conclusions

To conclude, an extensive dataset of PS, PI, PMMA, and PBD dilute bidisperse blends has been considered in order to examine the value of the release time associated with a short-long entanglement, τ_obs, which governs the constraint release mechanism of the long chains. The value of τ_obs was first determined from experimental linear viscoelastic data following the method described in Ref. [3]. This allowed us to test and discuss the different scalings of τ_obs with the matrix molecular weight from the literature and to propose a new and simple expression, according to which τ_obs ∝ Z_S^2.5. Interestingly, it was found that, based on this expression, all the data of the CRR times collapse into a single curve within the experimental scatter, and the universal behavior of the long chain dynamics seems to be recovered for all polymer chemistries investigated in this work. Then, we tested the Struglinski-Graessley criterion. Instead of the original criterion r_SG, we considered the modified criterion r*_SG proposed by Read et al. to account for CLF of the matrix.
It was shown that the critical value for r * SG to obtain full CRR relaxation of the long chains is well described by the ratio τ obs τ d,S , considering both τ obs ∝ Z 2.5 S and τ obs ∝ Z 3 S . Finally, the new expression of τ obs was implemented in a CRR model and tested on the different binary blends containing self-unentangled long chains. A very good agreement between experimental and predicted linear viscoelastic data was obtained for all polymer chemistries, supporting the new equation for τ obs proposed in this study. To conclude, we proposed a new simple expression for τ obs that can be understood from a theoretical point of view and that can easily be implemented in tube models for different polymer chemistries. This is a first step towards the understanding of constraint release mechanisms in entangled bidisperse blends with self-entangled long chains. Supplementary Materials: The following supporting information can be downloaded at: https: //www.mdpi.com/article/10.3390/polym15061569/s1, Figure S1: Experimental storage and loss moduli of monodisperse PS samples [24] plotted under iso-Tg conditions (Tref-Tg = 60.4, symbols) and at Tdata = 167 • C (dashed grey lines), Figure S2: Shifted storage and loss moduli from reference [24] of PS316-39 blends (symbols), compared to the predictions obtained with the TMA model for the mono-disperse components PS39 and PS316 (black curves), Figure S3: (a) Storage modulus data of PS2810-72.4 and PI626-17.6 normalized by (a) G 0 N , or (b) G 0 N ν L , with respect to ωτ e , Figure S4: Comparison between experimental (symbols) viscoelastic storage modulus data of PI626 blends in different matrices with a modified CRR relaxation modeled by Equation (S2) (continuous curves) or with a pure CRR mechanism (dashed curves). See references [14,24,25,37]. Data Availability Statement: The data measured for this study are available on request from the corresponding author.
2023-03-24T15:27:34.344Z
2023-03-01T00:00:00.000
{ "year": 2023, "sha1": "bcad5580e1d008e9cdce3f4f705b7a421d47538d", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2073-4360/15/6/1569/pdf?version=1679406731", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b25ad7b4b640469f54798714cead4055fe166bbf", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [] }
195833650
pes2o/s2orc
v3-fos-license
Counting rational curves on K3 surfaces with finite group actions G\"ottsche gave a formula for the dimension of the cohomology of Hilbert schemes of points on a smooth projective surface $S$. When $S$ admits an action by a finite group $G$, we describe the action of $G$ on the Hodge structure. In the case that $S$ is a K3 surface, each element of $G$ gives a trace on $\sum_{n=0}^{\infty}\sum_{i=0}^{\infty}(-1)^{i}H^{i}(S^{[n]},\mathbb{C})q^{n}$. When $G$ acts faithfully and symplectically on $S$, the resulting generating function is of the form $q/f(q)$, where $f(q)$ is a cusp form. We relate the Hodge structure of Hilbert schemes of points to the Hodge structure of the compactified Jacobian of the tautological family of curves over an integral linear system on a K3 surface as $G$-representations. Finally, we give a sufficient condition for a $G$-orbit of curves with nodal singularities not to contribute to the representation. Introduction Let S be a smooth projective K3 surface over C. In [YZ96] and [Bea99], the number of rational curves in an integral linear system on S is calculated using the relative compactified Jacobian. The idea is that the Euler characteristic of the relative compactified jacobian equals the number of maximally degenerate fibers if all rational fibers are nodal, and these are the rational curves we want. But the relative compactified jacobian is birational to the Hilbert scheme of points of S, the Euler characteristic of which is computed in [Göt90]. Hence we get the number of rational curves and the generating series: ( where N(n) is the number of rational curves contained in an n-dimensional linear system |L|, C n is the tautological family of curves over |L| with fibers being integral, S [n] is the Hilbert scheme of n points of S and ∆(t) = t n≥1 (1 − t n ) 24 is the unique cusp form of weight 12 for SL 2 (Z). In this paper G will always be a finite group. We will consider a smooth projective K3 surface S over C with a G-action, and ask whether we can prove a similar equality for G-representations. There are many different methods to obtain the generating series of the Euler characteristic of the Hilbert scheme of points of S. We follow the approach by Nakajima and Grojnowski [Gro96][Nak97] [Nak99], which describe the sum of the cohomology groups of all the Hilbert schemes of points ⊕ ∞ n=0 H * (S [n] ) as the Fock space F(H * (S)) (See Section 2). We will consider the G-equivariant Hodge-Deligne polynomial for a smooth projective variety X where the coefficients lie in the ring of virtual G-representations R C (G), of which the elements are the formal differences of isomorphism classes of finite dimensional C-representations of G. The addition is given by direct sum and the multiplication is given by tensor product. Define We may abbreviate F (X; u, v) (resp. E(X; u, v)) as F (X) (resp. E(X)). The main results are as follows. Theorem 1.1. Let S be a smooth projective surface over C with a G-action. Let S [n] be the Hilbert scheme of n points of S. Then we have the following equality as virtual G-representations. Corollary 1.2. Let S be a smooth projective surface over C with a G-action. If we fix p, q ≥ 0, then H p,q (S [n] , C) become stable for n ≥ p + q as G-representations. Definition 1.3. Let X and Y be smooth projective varieties over C with G-actions. They are called G-equivariant K-equivalent if there exists a smooth projective variety Z with a G-action and G-equivariant birational morphisms f : Z → X and g : Z → Y such that f * ω X ∼ = g * ω Y . 
Theorem 1.4. Let X and Y be smooth projective varieties over C with G-actions. If X and Y are G-equivariant K-equivalent, then as Hodge structures with G-actions. Recall that a linear system |L| is called an integral linear system if every effective divisor in it is integral. |L| is called G-stable if G induces an action on the projective space |L|, which means G maps an effective divisor in |L| again to an effective divisor in |L|. Corollary 1.5. Let S be a smooth projective K3 or abelian surface over C with a G-action, and let C n be the tautological family of curves over any n-dimensional integral G-stable linear system. Then we have the following equality as virtual Grepresentations, where J n (C n ) is the relative compactified jacobian. Remark 1.6. Note that we are fixing the surface S here, so the equality above should be understood as: if S admits an n-dimensional integral G-stable linear system, then E(J n (C n )) equals the coefficient of t n on the right hand side. Recall that for a complex K3 surface S with an automorphism g of finite order n, H 0 (S, K S ) = Cω S has dimension 1, and we say g acts symplectically on S if it acts trivially on ω S , and g acts non-symplectically otherwise, namely, g sends ω S to ζ k n ω S , 0 < k < n, where ζ n is a primitive n-th root of unity. We denote by [e(X)] the alternating sum of cohomology groups E(X; 1, 1) ∈ R C (G). For the right hand side of the equality in Corollary 1.5, we have Theorem 1.7. Let G be a finite group which acts faithfully and symplectically on a complex K3 surface S. Then ǫ(ord(g k ))t mk k for all g ∈ G, where ǫ(n) = 24 n p|n 1 + 1 p −1 . In particular, if G is generated by a single element g of order N ≤ 8, then we have )])t n equals t/∆(t), where ∆(t) = η(t) 24 is a level 1 cusp form of weight 12. But by Theorem 1.6, we deduce that when an element g of order N acts faithfully and symplectically on S, we have Theorem 1.9. Let G = g be a finite group generated by an automorphism g of order p, where p is a prime number. Suppose g acts non-symplectically on a complex K3 surface S. Then we have for all g ∈ G, g = 1, where d = rank T (g) p−1 , and T (g) := (H 2 (S, Z) g ) ⊥ is the orthogonal complement of the g-invariant sublattice. For the left hand side of the equality in Corollary 1.5, recall that in [Bea99], for the curve C in C n , the topological Euler characteristic e(J n (C)) = 0 if the nor-malizationC has genus ≥ 1, and the topological Euler characteristic e(J n (C)) = 1 if C is a nodal rational curve. Hence intuitively that is why e(J n (C n )) counts the number of rational curves in C n if we assume all rational curves in C n are nodal. But in our situation, e(J n (C)) = 0 does not mean [e(J n (C))] = 0 as G-representations. Hence non-rational curves may also contribute to [e(J n (C n ))], and certain G-orbits of curves contribute certain representations (See Example 1, 2 and 3). Nevertheless, we show that a G-orbit of curves with nodal singularities will contribute nothing if the normalization of the curve quotient by its stablizer is not rational. By this method, we are able to understand certain G-orbits in the linear system. We denote by [e(X)] the alternating sum of the compactly supported l-adic cohomology when we are in the situation of characteristic p. Theorem 1.10. Let C be an integral curve over F p with nodal singularities and a G-action. Suppose p ∤ |G|. Denote byC its normalization. IfC/ g is not a rational curve (i.e. P 1 ) over F p for every g ∈ G, then [e(J n C)] = 0 as G-representations. 
Corollary 1.11. Let C be an integral curve over F p with nodal singularities and a G-action. Suppose p ∤ |G|. Denote byC its normalization. IfC/G is not a rational curve (i.e. P 1 ) over F p , then [e(J n C)] = 0 as G-representations. By the above discussions, we show that the representation [e(J n (C n ))] actually 'counts' the curves in C n whose normalization quotient by its stablizer is rational (See Example 1, 2, and 3). This paper is organized as follows. In Section 2, we recall the Nakajima operators on the cohomology groups of Hilbert schemes of points, and we show that the theory works in the G-equivariant settings. In Section 3, we work with the G-equivariant Grothendieck ring and prove Theorem 1.4 via motivic integration. In Section 4, we deal with compactified jacobians and prove Theorem 1.10 and Corollary 1.11. In Section 5, we prove Corollary 1.5, Theorem 1.7 and Theorem 1.9 by using the results in previous sections. Then we give three explicit examples when G equals Z/2Z or Z/3Z. Finally, when G = P SL(2, 7), A 6 , A 5 or S 5 , we show that the smooth projective curve C over C with a faithful G-action must be of specific kind if there exists g ∈ G such that C/ g = P 1 . Hilbert schemes of points Let X [n] denote the component of the Hilbert scheme of a projective scheme X parametrizing subschemes of length n of X. For properties of Hilbert scheme of points, see references [Iar77], [Göt94] and [Nak99]. The following theorem is proved for smooth projective surfaces over C in [Göt90], and for smooth quasi-projective surfaces over C in [GS93]. Theorem 2.1. Let S be a smooth quasi-projective surface over C. Then the generating function of the Poincaré polynomials of the Hilbert scheme S [n] is given by One can also prove the above theorem for a smooth projective surface S by constructing an action of the Heisenberg algebra on the direct sum of the cohomology groups of all the S [n] [Gro96][Nak97] [Nak99]. We recall some of the constructions following [EG00]. A vector space V over Q is called a super vector space if there is a decomposition V = V + ⊕ V − of V into an even and an odd part. For any super vector space V , one can construct an algebra F(V ), which is called the Fock sapce. There is an isomorphism of graded vector space are the symmetric and alternating algebra on V . The grade is given by the exponent of the t powers. The element in the infinite tensor product should be understood as a finite linear combinition of the elements in some finite tensor product. One can also construct the current algebra (Heisenberg algebra) S(V ), which acts for i ≥ 1, u ∈ H r,s (X) and α ∈ H p,q (X [n] ). Proof of Theorem 1.1. Since a G-representation is determined by the trace, we can assume that G is a cyclic group without loss of generality. We deduce that as graded G-representation, where H p,q i (S) are eigenspaces of G acting on H p,q (S). Combining the fact that F(H * (S)) is mapped G-equivariantly to H(S) as S(H * (S))modules and the relation 1, we deduce that as virtual G-representations. If we only consider the cohomology groups, then Theorem 1.1 implies the following statements. Corollary 2.3. Let S be a smooth projective surface over C. as virtual G-representations, where b k are the Betti numbers of S, and where g j,i are the eigenvalues of g acting on H j (S, C), 0 ≤ j ≤ 4. We will need the above expression in Section 5. Now we prove Corollary 1.2. Proof of Corollary 1.2. We let as G-representation. Now fix p, q and take n ≥ p + q. 
Then we deduce that Notice that a p,q,0 (G(u, v, 1)) is a representation independent of n. Hence H p,q (S [n] , C) become stable as G-representations for n ≥ p + q. K-equivalent smooth projective varieties Recall that two smooth projective varieties X and Y are called K-equivalent if there exists a smooth projective variety Z and birational morphisms f : Z → X and The following result is proved by Kontsevich. Theorem 3.1. Let X and Y be smooth projective complex K-equivalent varieties. Then for all p, q. The proof uses motivic integrations and actually shows that in a localized and completed Grothendieck ring of varieties. For the proof in the non-equivariant case, see [Bli11] and [Loe09]. We will prove a G-equivariant version of the theorem. This definition was used in [LN20, §2.1] and [DL02, 2.9.]. Any G-action on a quasi-projective variety is a good G-action. For the equivariant motivic integration, see the discussions in [LN20], which follows [DL99]. For a smooth projective variety X, we denote the m-th jet space of X by J m (X) and the arc space of X by J ∞ (X). We will need an equivariant transformation rule, which is essentially proved in [LN20, Theorem 3.1]. If we let the measure take values in M G C , then we deduce the following. where the equality holds in M G C , and h ∞ : Proof of Theorem 1.4. Without loss of generality, we can assume G = µ is cyclic. By [DL02,3.4] (taking the Hodge realization of K 0 (Mot C,E )), we deduce that there is a map from M G C to the Grothendieck ring of Hodge structures with a µ -action. Hence it suffices to prove that Recall that if X and Y are G-equivariant K-equivalent, there exists a smooth projective variety Z with a G-action and G-equivariant birational morphisms f : Z → X with respect to both f and g. Suppose dimX = dimY = n. Then Corollary 3.4. Let X and Y be smooth projective algebraic varieties over C with G-actions. If X and Y have trivial canonical bundles and there is a G-equivariant Proof. Since there is an equivariant birational map between smooth projective varieties X and Y , we deduce that X and Y are G-equivariant K-equivalent by the Hence by Theorem 1.4, it follows that F (X; u, v) = F (Y ; u, v). Compactified Jacobians We will need to consider varieties over finite fields in this section, since we do not know whether Lemma 4.5 is true for singular cohomology in characteristic 0. Recall some facts from [AK76], [Ale04] and [EGK00]. Let C/S be a flat projective family of integral curves. By a torsion-free rank-1 sheaf I on C/S, we mean an S-flat coherent O C -module I such that, for each point s of S, the fiber I s is a torsion-free rank-1 sheaf on the fiber C s . We say that Given n, consider theétale sheaf associated to the presheaf that assigns to each locally Noetherian S-scheme T the set of isomorphism classes of torsion-free rank-1 sheaves of degree n on C T /T . This sheaf is representable by a projective Sscheme, denotedJ n C/S . It contains J n := Pic n C/S as an open subscheme. For every S-scheme T , we have a natural isomorphismJ n C T /T =J n C/S × T . If S = Speck for an algebraically closed field k, we denoteJ n C/S by J n C. Recall that at the beginning we are considering C, which is the tautological family of curves over an n-dimensional integral G-stable linear system. 
Since C has a stratification according to the geometric genus of the fibers and the G-action (see §6), we can temporarily focus our attention on J n C for a single singular curve C with a G-action (note that our G-action on J n C is given by pushing forward the torsion-free rank-1 sheaves). This is reasonable since we have is the alternating sum of the compactly supported l-adic cohomology groups. Proof. One way to see this is to consider the bounded long exact sequence 0 → Now we have an integral curve C over F p . Recall that J n C parametrizes the isomorphism classes of torsion-free rank-1 sheaves of degree n on C, and we have the following facts [Bea99]. Proposition 4.2. Let C be an integral curve over an algebraically closed field k. (1) If L ∈ J n C is a non-invertible torsion-free rank 1 sheaf, then L = f * L ′ , where L ′ is some invertible sheaf on some partial normalization f : C ′ → C. (2) If f : C ′ → C is a partial normalization of C, then the morphism f * : J n C ′ → J n C is a closed embedding. Using these two facts, we obtain the following corollary. Corollary 4.3. Let all the singularities of an integral curve C be nodal singularities. Then J n C has the following stratification where J n C parametrizes rank-1 torsion-free sheaves of degree n, and C ′ goes through all partial normalizations of C (including C itself). Now let J n C ′ be some stratum which is preserved by G. We want to calculate the Here we need to make use of the short exact sequence of algebraic groups where L is a smooth connected linear algebraic group [BLR90, §9 Corollary 11], and C ′ is the normalization of C ′ . Since L is linear, we have that J n C ′ is a principal Zariski fiber bundle over J nC ′ [Ser88, Chapter VII, Proposition 6]. Now we need to prove the following lemma, which is used to prove lemma 4.5. Lemma 4.4. Let X and Y be two smooth projective varieties over F p with finite group G-actions. Suppose X, Y and the actions of G can be defined over F q , where q is a p power. If |X(F p ) gF q n | = |Y (F p ) gF q n | for every n ≥ 1 and g ∈ G, then Denote by F q the geometric Frobenius over F q . Since the finite group action is defined over F q , the action g commutes with F q and the action of g on the cohomology group is semisimple. There exists a basis of the cohomology group such that the actions of g and F q are in Jordan normal forms simultaneously. Let α i,j , j = 1, 2, ..., a i (resp. β i,j , j = 1, 2, ..., b i ) denote the eigenvalues of F q acting on H i (X, Q l ) (resp. H i (Y, Q l )) in such a basis, where a i (resp. b i ) is the i-th betti number. Let c i,j , j = 1, 2, ..., a i (resp. d i,j , j = 1, 2, ..., b i ) denote the eigenvalues of g acting on the same basis of H i (X, Q l ) (resp. H i (Y, Q l )). Then the Grothendieck for every n ≥ 1. By linear independence of the characters χ α : Z + → C, n → α n and the fact that α i,j , β i,j , j = 1, 2, ... all have absolute value q i/2 by Weil's conjecture, we deduce that a i = b i and a i j=1 c i,j = b i j=1 d i,j for each i. But since g is arbitrary, this implies that the G-representations H i (X, Q l ) and H i (Y, Q l ) are the same. Proof. We first deal with the case when E = B × F is a trivial bundle. We begin with a homotopy argument. Fix g ∈ G. By assumption, we have a commutative commutes. On the other hand, we have The automorphism φ acts on it and, at b ∈ B it acts the way g b acts on H i c (F, Z/nZ). Since an endomorphism of a constant sheaf over a connected base is constant, the action of φ is the same everywhere. 
Passing to limit, we deduce that the actions of g b on H * c (F, Q l ) are the same for every b ∈ B. Suppose E, B, F and the G-actions are defined over F q . Fix n > 0. If b 1 , b 2 ∈ B are fixed points of gF q n , then by what we just proved and the Lefschetz trace formula, Hence we have the following equality. Since the equality holds for all n > 0, by the proof in Lemma 4.4, the lemma follows. Now for the general case, we fix g ∈ G. It suffices to prove that the action of g b 1 on H * c (F, Q l ) is the same as the action of g b 2 for any b 1 , b 2 ∈ B fixed by gF q n . Take open neighborhoods U 1 , U 2 of b 1 , b 2 which trivialize the bundle. Replacing U by ∩ ∞ n=0 g n (U), we can assume U 1 , U 2 are g-stable and connected since B is irreducible. Now let V = U 1 ∩ U 2 and take any closed point b 0 ∈ V . By the discussion in the trivial bundle case, we deduce that the action of g b 1 is the same as the action of g b 0 , which is the same as the action of g b 2 . Hence we have for all n > 0, and we are done. Proof. Let f * : J n C → J nC be the pullback map. Since g is an automorphism on C andC, we have g * f * = f * g * . Now we use Lemma 4.5. Now to prove Theorem 1.10, we first prove the following statement about [e(J n C)]. Lemma 4.7. Let C be an integral curve over F p with nodal singularities and a Gaction. Suppose p ∤ |G|. IfC/ g is not a rational curve for every g ∈ G, then [e(J n C)] = 0 as G-representations. Proof. By Corollary 4.6, it suffices to prove [e(J nC )] = 0, which is equivalent to Tr(g, [e(J nC )]) = 0 for any g ∈ G. But J nC is an abelian variety, which means H i (J nC , Q l ) ∼ = ∧ i H 1 (J nC , Q l ). Since p ∤ |G|, the curveC/ g is smooth. Then sinceC/ g is not rational, we have H 1 (C/ g , Q l ) = 0. Hence H 1 (J nC , Q l ) g = 0, which implies Tr(g, [e(J nC )]) = 0. This is because H 1 (J nC , Q l ) = V 0 ⊕ V 1 , where V 0 is the non-empty eigenspace of g with eigenvalue 1, and V 1 is its complement. Now with the help of Corollary 4.3, we can prove Theorem 1.10 and Corollary 1.11. Proof of Theorem 1.10. Fix g ∈ G. Recall that J n C = C ′ →C J n C ′ by Corollary 4.3. Depending on the action of g on the nodes of C, g * permutes or acts on the strata J n C ′ . For any union J n C ′ of two or more strata permuted by g * cyclically, the trace of g on H i c ( J n C ′ ) equals 0 since g acts by cyclically permuting the components H i c (J n C ′ ). For the stratum which is stable under g, the trace of g is also 0 by Lemma 4.7. Hence [e(J n C)] = 0 by Lemma 4.1. Proof of Corollary 1.11. IfC/G is not a rational curve, thenC/ g is not rational for any g ∈ G. Rational curves on surfaces Let S be a smooth projective K3 surface over C with a G-action, and let C be the tautological family of curves over an n-dimensional integral linear system |L| acted on by G. Then J n C is a smooth projective variety over |L| whose fiber over a point t ∈ L is the compactified jacobian J n C t . Choose some good reduction over q such that 'everything' (J n C, S, G-action etc.) is defined over F q , where q is a p power for a prime number p. If we choose a large enough p, then |L| is still integral after the reduction. Notice that |L| has a stratification where each stratum B satisfies Stab G (t) = H for every t ∈ B and some subgroup H, and the fibers C t of the stratum have the same geometric genus. This is because for any subgroup H in G, |L| H \∪ H ′ H |L| H ′ is a locally closed subspace. 
The reason for the stratification by the geometric genus is that the geometric genus gives a lower semicontinuous function Proof of Theorem 1.7. By Corollary 2.3, we deduce that Recall the definition of the Dedekind eta function η(t) = t 1/24 ∞ n=1 (1 − t n ), where t = e 2πiz . Fix a generator g of G. If N is a prime number p, we notice that ord(g k ) = 1 if p|k, and ord(g k ) = p otherwise. Hence If N = 4, we have If N = 6, we have If N = 8, we have Proof of Theorem 1.9. By Corollary 2.3, we have Fix g = 1 and notice that S g is the same as S g k for p ∤ k. We deduce Tr(g, [e(S)]) = Tr(g k , [e(S)]) = 24 − dp by Theorem 5.2. Hence Example 1 (Z/2Z). Here we look at an explicit K3 surface with a symplectic Z/2Z-action. Consider the elliptic K3 surface S defined by the Weierstrass equation where (a 1 , a 2 ) ∈ C 2 , (b 1 , ..., b 6 ) ∈ C 6 are generic. The fibration has 24 nodal fibers (Kodaira type I 1 ) over the zeros of its discriminant polynomial and those zeros do not contain 0 and ∞. The automorphsim of order 2 σ(x, y, t) = (x, −y, −t) acts non-trivially on the base of the fibration and preserves the smooth elliptic curves over t = 0 and t = ∞. Now denote one of the fibers by L, then |L| is a σ -invariant integral linear system and all of the singular curves in |L| are nodal rational curves. We want to understand the σ-orbits in |L|. Since we know explicitly the action of σ, by calculation we know that there are 4 σ-fixed points on the fiber over t = 0 and 4 σ-fixed points on the fiber over t = ∞. So σ has 8 isolated fixed points, hence it is a symplectic involution. Now we know from Theorem 1.7 that is the 1-dim representation on which σ has eigenvalue s. This is because for a σ-stable smooth curve C of genus g whose quotient by σ is rational, by a similar argument as in the proof of Lemma 4.7, we deduce that [e(J g (C))] = 2 2g−1 V 1 − 2 2g−1 V −1 . But since we already know the representation [e(JC)], by calculation we have n 1 = 12 and n 2 = 2. On the other hand, this coincides with the geometric picture. From the definition of σ we observe that there are indeed 12 σ-orbits of nodal rational curves. Denote by C 0 , C ∞ the fibers over t = 0, ∞. Since σ preserve C 0 and there are 4 σ-fixed points, we deduce that the degree 2 morphism C 0 → C 0 / σ has 4 ramification points. Hence by the Riemann-Hurwitz formula C 0 / σ is smooth rational. By the same argument, C ∞ / σ is also smooth rational. This is what we have expected since there should be two such curves from the calculation of the representations. Example 2 (Z/3Z). Here we look at an explicit K3 surface with a non-symplectic Z/3Z-action [AST11, Remark 4.2]. Consider the elliptic K3 surface S defined by the Weierstrass equation where (a 1 , a 2 ) ∈ C 2 , (b 1 , ..., b 4 ) ∈ C 4 are generic. The fibration has 24 nodal fibers (Kodaira type I 1 ) over the zeros of its discriminant polynomial and those zeros do not contain 0 and ∞. The automorphism of order 3 σ(x, y, t) = (x, y, ζ 3 t) acts non-trivially on the basis of the fibration and preserves the smooth elliptic curves over t = 0 and t = ∞. Now denote one of the fibers by L, then |L| is a σ -invariant integral linear system and all of the singular curves in |L| are nodal rational curves. We want to understand the σ-orbits in |L|. But since we already know the representation [e(JC)], by calculation we have n 1 = 8 and n 2 = 1. On the other hand, this coincides with the geometric picture. 
From the definition of σ we observe that there are indeed 8 σ-orbits of nodal rational curves. Since the action of σ is explicit, by calculation we know that there are 3 σ-fixed points on the fiber over t = ∞. Denote by C ∞ the fiber over t = ∞. Then this implies that the degree 3 morphism C ∞ → C ∞ / σ has 3 ramification points each of order 2. Hence by the Riemann-Hurwitz formula C ∞ / σ is smooth rational, which is what we have expected. Example 3 (2-dim Z/2Z). Let S be a K3 surface given by the double cover of P 2 branched over a smooth sextic curve C in P 2 . Let τ be the involution on P 2 sending (x : y : z) to (−x : y : z). Denote the covering involution by i : S → S. Then if we suppose C is τ -invariant, the 'composition' of τ and i will give a symplectic involution σ on S ([GS07, Section 3.2]). The fixed locus of τ on P 2 consists of a point x 0 = (1 : 0 : 0) and a line l 0 = {(x : y : z)|x = 0}. Denote the six intersection points of l 0 and C by x 3 , x 4 , ..., x 8 . Let π : S → P 2 be the double cover map. Denote the two points in π −1 (x 0 ) by x 1 , x 2 . Then the fixed locus of σ is the eight points x 1 , x 2 , ..., x 8 . Notice that σ commutes with i and the induced action of σ on P 2 is just τ . Now let L = π * O P 2 (1). Then the linear system |L| consists of the curves which are the preimages of the lines in P 2 under π. For a generic choice of C, |L| is a σ-invariant integral linear system. A generic line will intersects C in six points, and its preimage is a smooth genus 2 curve. Some lines will intersect C in a tangent point and 4 other distinct points, and their preimages are curves with one node. The other lines are the 324 bitangents of C, which can be seen from the Plücker formula or the coefficient of t 2 in ∞ n=1 (1 − t n ) −24 . Let C → |L| be the tautological family of curves over |L|. Now we know from The preimage of l 0 is a smooth genus 2 curve, and it has 6 ramification points x 3 , x 4 , ..., x 8 under σ. Hence its preimage quotient by σ is a smooth rational curve, and it will contribute 8V 1 − 8V −1 to the representation by a similar argument as in the proof of Lemma 4.7. The preimages of {(x : y : z)|by + cz = 0} are more complicated. A generic line will intersect C in six points, and its preimage is a smooth curve of genus 2. It has 2 ramification points x 1 , x 2 under σ. Hence its preimage quotient by σ is an elliptic curve, and it will contribute nothing to the representation. Lemma 5.5. Let C 2 be an integral curve of arithmetic genus 2 with two nodes over F p . If there is a involution σ acting on it, and σ permutes the nodes, then we have [e(JC 2 )] = V 1 as Z/2Z-representations. Proof of Lemma 5.4. Since C 1 is an integral curve of arithmetic genus 2 with one node, its normalization π :C 1 → C 1 is an elliptic curve, and we denote it by E. By On the other hand, since σ fixes the points over the node, we deduce from the short exact sequence Proof of Lemma 5.5. Since C 2 is an integral curve of arithmetic genus 2 with two nodes, its normalization π :C 2 → C 2 is a rational curve. It also has two partial normalizations by resolving one of the nodes π 1 : C ′ 2 → C 2 and π 2 : C ′′ 2 → C 2 . By Corollary 5.3 and Corollary 5.6, we have On the other hand, G m is an affine curve. So dim H 2 c (G m , Q l ) = 1 and dim H 0 c (G m , Q l ) = 0. Since the topological Euler characteristic of G m is 0, we also have dim H 1 c (G m , Q l ) = 1. 
Notice that σ permutes two G m 's, and hence by the Künneth formula we have Combining the above discussion, we have [e(JC 2 )] = V 1 . Finally, let us give some discussions when G equals a certain finite simple group. Example 4 (PSL(2,7)). Let S be a complex K3 surface acting faithfully by G = P SL 2 (F 7 ). Such a K3 surface exists. For example, P SL(2, 7) acts faithfully and symplectically on the surface X 3 Y + Y 3 Z + Z 3 X + T 4 = 0 in P 3 by means of a linear action on P 3 [Muk88]. We know from Theorem 1.10 that a G-stable curve C with nodal singularities in an integral linear system does contribute to the representation [e(J d C)] only if there exists some g ∈ G such thatC/ g = P 1 . It turns out that if this happens, thenC must be the Klein quartic, which is the Hurwitz surface of the lowest possible genus. Notice that G acts on C faithfully since any non-trivial element of G acts symplectically on S and cannot fix curves. Proposition 5.6. Let C be a smooth projective curve over C with a faithful G = P SL(2, 7)-action. If there exists g ∈ P SL(2, 7) such that C/ g = P 1 , then the genus of C is 3 and g has order 7. In particular, the automorphism group of C reaches its Hurwitz bound, and hence C is the Klein quartic. Proof. The idea is to use the equivariant Riemann-Hurwitz formula [Ser79, Chapter VI §4] for π : C → C/G = P 1 . We have [e(C)] = e(P 1 )I 1 − p∈P 1 as G-representations, where h p is the stablizer of some point over p, I hp denotes the induced representation Ind G hp ½, and ½ is the 1-dim trivial representation. Notice that I hp is independent of the point we choose over p. sides, the first one gives dimension -12 and the second gives dimension 6. Hence the only possibility is H 1 (C, C) = I 1 − I 2 − I 3 − I 7 + 2½ = χ 2 + χ 3 , which shows that the genus of C is 1 2 dimH 1 (C, C) = 3. We also deduce from this argument that g must has order 7 since the element of order not equal to 7 does have fixed vectors in χ 2 and χ 3 . Following this observation, we do the calculations for some other groups in Mukai's list [Muk88]. Example 5 (A 6 ). G = A 6 acts faithfully and symplectically on the K3 surface 6 1 X i = 6 1 X 2 i = 6 1 X 3 i = 0 in P 5 via permutation action of coordinates on P 5 . Then by Theorem 1.10, a G-stable curve C with nodal singularities in an integral linear system will not contribute to the representation [e(J d C)]. Proposition 5.7. Let C be a smooth projective curve over C with a faithful G = A 6action. Then for any g ∈ A 6 , we have C/ g = P 1 . Proof. We have the following character table for A 6 . Example 6 (A 5 ). G = A 5 acts faithfully and symplectically on the K3 surface 5 1 X i = 6 1 X 2 i = 5 1 X 3 i = 0 in P 5 via permutation action of the first 5 coordinates on P 5 . Then by Theorem 1.10, a G-stable curve C with nodal singularities in an integral linear system can contribute to the representation [e(J d C)] only ifC is rational. Proposition 5.8. Let C be a smooth projective curve over C with a faithful G = A 5action. If there exists g ∈ A 5 such that C/ g = P 1 , then C must be a smooth rational curve. Example 7 (S 5 ). G = S 5 acts faithfully and symplectically on the K3 surface 5 1 X i = 6 1 X 2 i = 5 1 X 3 i = 0 in P 5 via permutation action of the first 5 coordinates on P 5 . Then by Theorem 1.10, a G-stable curve C with nodal singularities in an integral linear system can contribute to the representation [e(J d C)] only ifC has genus 4. 
In this case C̃ has the largest possible automorphism group for a genus 4 curve and C̃ is Bring's curve. Proposition 5.9. Let C be a smooth projective curve over C with a faithful G = S_5-action. If there exists g ∈ S_5 such that C/⟨g⟩ = P^1, then C has genus 4 and g has order 5. In particular, C is Bring's curve.
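As a quick sanity check on the counts used in this paper (Example 3 reads off the 324 bitangents from the coefficient of t^2 in the product over n of (1 - t^n)^(-24), and the introduction identifies the coefficient of t^n with the number of rational curves in an n-dimensional linear system), here is a short self-contained computation of the first coefficients of that series; plain power-series arithmetic, no external libraries assumed.

# Coefficients of prod_{n>=1} (1 - t^n)^(-24) up to degree N.
N = 10
coeffs = [1] + [0] * N
for k in range(1, N + 1):
    for _ in range(24):
        # multiply the truncated series by 1/(1 - t^k): c[i] += c[i-k]
        for i in range(k, N + 1):
            coeffs[i] += coeffs[i - k]

print(coeffs[:6])  # expected: [1, 24, 324, 3200, 25650, 176256]

The value 324 in degree 2 matches the bitangent count of Example 3, and 24 in degree 1 matches the number of nodal fibers in Examples 1 and 2.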
2019-07-09T13:03:06.041Z
2019-07-07T00:00:00.000
{ "year": 2019, "sha1": "3a0fda7b499c323daae0a9293117ad22d478fc93", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1907.03330", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "c17191efaa9508dfc9a670554930ed03d85e7039", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
266180817
pes2o/s2orc
v3-fos-license
Reproductive Biology of Striped Snakehead (Channa striata, Bloch, 1793) in Floodplain of Lubuk Lampam, South Sumatra

Striped Snakehead (Channa striata) is one of the fish species inhabiting flooded floodplains. Currently, the striped snakehead population in the Lubuk Lampam floodplain is undergoing a significant decline due to continuous and unregulated fishing activities. To prevent this population decline, there is a need for comprehensive efforts in managing their reproduction. The objective of this study was to investigate the reproductive biology of striped snakehead in the floodplain of Lubuk Lampam, South Sumatra. The research spanned five months, from December 2022 to April 2023, and involved the collection of fish samples from four different floodplain types: river, lebung, lebak kumpei, and rawang. A total of 284 striped snakehead were examined, and their length-weight and reproductive biology were observed. The findings revealed that the sex ratio of striped snakehead was skewed toward females, with a ratio of 1:1.7 (males to females). The size at which female striped snakehead reach maturity was determined to be 28.5 cm, while male striped snakehead matured at 29.30 cm. The peak spawning season for striped snakehead occurred in December, coinciding with the rainy season. The spawning grounds were predominantly located at the lebung station in the upper reaches of the Lubuk Lampam Floodplain. Striped Snakehead exhibited a partial spawning behavior. Furthermore, these fish demonstrated a relatively high reproductive potential, with fecundity ranging from 5,859 to 30,321 eggs.

INTRODUCTION

Striped snakehead, known as Channa striata, inhabits various environments such as floodplains, muddy watersheds, rice fields, and brackish water, and it is a fish species native to Indonesian aquatic ecosystems (Iqbal 2011; Widyastuti et al. 2017). The Striped Snakehead's distribution spans several Indonesian islands, including Sumatra, Kalimantan, Java, and various other islands (Mujiatami, 2015). Striped Snakehead has a rich nutritional profile, with a protein content that can reach up to 30%. Beyond its nutritional value, striped snakehead offers a range of benefits, including its potential to aid in wound healing due to its abundance of omega-3 and omega-6 fatty acids (Andrie et al. 2018; Nofriyanti et al. 2020). In the region of South Sumatra, striped snakehead holds significant economic importance and is extensively utilized in processed products such as crackers, pempek, and salted fish, contributing to its high commercial value (Iqbal et al. 2018). According to Gustiano et al. (2019), Striped Snakehead serves as a primary ingredient for producing crackers and pempek. At the South Sumatra market, Striped Snakehead is typically sold at prices ranging from IDR 45,000 to IDR 100,000 per kilogram. In Indonesia, economically speaking, C. striata holds significant value as a commodity, with a price range of IDR 30,000 to IDR 60,000/kg (Rahayu et al. 2021).
Among various fish species, the catch of Striped Snakehead in public waters accounts for the largest share, constituting approximately 14.2% of the total catch (Kartamihardja, 2014). In South Sumatra, floodplains serve as significant production hubs for the capture of Striped Snakehead. An illustrative instance of such a flooded swamp area is the Lubuk Lampam Floodplain. Lubuk Lampam is a crucial region for fisheries and is situated within the Lebak Ogan Komering Ilir area. From 2019 to 2020, the catch of Striped Snakehead in the Lubuk Lampam Floodplain of South Sumatra decreased from 492 tons to 341 tons. Over the past five years, there has been a growing demand for Striped Snakehead in South Sumatra. This escalating demand has led to uncontrolled exploitation of Striped Snakehead in the waters of Lubuk Lampam, posing a serious threat to the population of this species in the area, which is expected to experience a significant decline in numbers as a consequence. To prevent this decline, it is essential to implement management strategies that focus on a comprehensive understanding of the reproductive aspects of Striped Snakehead. Currently, there is a lack of available information regarding the reproductive biology of Striped Snakehead in the floodplains of Lubuk Lampam.

Understanding the reproductive biology of fish is a critical element in the management and sustainable exploitation of fisheries resources. Reproductive information plays a pivotal role in bolstering the success of breeding programs. Achieving successful fisheries management hinges on the precision of fecundity assessments, which provide valuable insights into the potential for fish populations to rebound and recover (Ath-thar et al. 2017; Saputra et al. 2017). It is known that the development of gonad maturity can be related to the size of the fish, specifically the length at which the gonads first become mature. This information can serve as a foundation for establishing regulations on the types of fishing gear permissible for use in floodplain areas. Additionally, this information can also be used as a basis for population management and fishing management in the Lubuk Lampam Floodplain. To fully support these endeavors, it is crucial to gather comprehensive and thorough information about the reproductive biology of Striped Snakehead in the Lubuk Lampam Floodplain. Hence, research on the reproductive patterns of Striped Snakehead (Channa striata) is imperative as a fundamental step in managing Striped Snakehead populations in the Floodplain of Lubuk Lampam, South Sumatra.

This study covered various aspects of fish reproduction, including sex ratio, gonadal maturity level, gonadal maturity index, size at first maturity (Lm), spawning season, spawning type, spawning site and reproductive potential. The data obtained can serve as a foundation for the management of striped snakehead in the floodplain. The purpose of this study was to examine the reproductive biology of Striped Snakehead (Channa striata, Bloch, 1793) in the Floodplain of Lubuk Lampam, South Sumatra.
Location and Time of Research
The research was conducted in the Lubuk Lampam Floodplain, situated within the Ogan Komering Ilir Regency of South Sumatra Province. Fish sampling extended over a span of five months, from December 2022 to April 2023. Fish sampling stations were established in four distinct floodplain types within Lubuk Lampam: 1) lebak kumpei; 2) rawang; 3) lebung; 4) river.

The field observation procedure involved several steps. Initially, the length and weight of the sampled fish were measured. Subsequently, the fish were dissected using a dissecting set to determine the sex and assess the maturity level of the gonads; the method of determination is detailed in Table 1 and Table 2. Both male and female snakehead gonads were weighed using digital scales with a precision of 0.001 g. The gonads of female Striped Snakehead were then placed into sample bottles, preserved with a 4% formalin solution, and labeled for identification and storage.

Table 1. Gonadal maturity level (GML) of female Striped Snakehead based on morphological characters (Efendie, 1979; Karmon, 2011).
GML I: In young fish, the gonads resemble a pair of thread-like structures extending along the lateral sides of the peritoneal cavity towards the front. They are transparent in color and have a smooth surface.
GML II: As they mature, the gonads become larger, exhibiting a yellowish-white coloration. At this stage, individual eggs cannot be discerned with the naked eye.
GML III: In adult fish, the gonads occupy nearly half of the peritoneal cavity. At this stage, the eggs become visible to the naked eye, appearing as fine grains, and the color of the gonads shifts to a greenish-yellow hue.
GML IV: The peritoneal cavity is predominantly occupied by fully mature gonads that have a brownish and very dark appearance. In comparison to GML III, the eggs at GML IV are clearly larger in size.

Table 2. Gonadal maturity level (GML) of male Striped Snakehead based on morphological characters (Efendie, 1979; Karmon, 2011).
GML I: The gonads are in the form of a pair of threads, but they are notably shorter than the ovaries of female fish at the same developmental stage. They are also transparent in color.
GML II: The gonads are milky white in color and look larger than the first-stage gonads.

Laboratory Analysis
Laboratory examinations involved the assessment of fecundity and egg diameter. Fecundity observations were carried out on 53 GML IV female Striped Snakehead. A subsample equal to 10% of the total gonad weight was taken from the anterior, middle and posterior parts of the gonads of each GML IV female, and the number of eggs in the subsample was counted to determine the fecundity value.

A total of 53 gonad samples were collected from female Striped Snakehead at Gonadal Maturity Level (GML) IV, and their fecundity values were computed. Subsequently, the diameter of the eggs was measured. From each gonad sample, 150 eggs were taken, 50 from each of the anterior, middle, and posterior sections, giving a total of 7,950 eggs whose diameters were observed in the study. Egg diameter was measured under a microscope equipped with an ocular micrometer; the longest axis was recorded. The readings were converted by multiplying by a calibration factor of 0.025.
Fish Sampling
Fish samples were taken to represent the biological condition of Striped Snakehead in the Lubuk Lampam Floodplain. The collection of fish samples was carried out using the simple random sampling technique (PCAS) at intervals of one month. The sample fish were obtained from fishermen's catches using various types of fishing gear (multi-fishing gear), both selective and non-selective. Fish samples obtained from fishermen were alive and were then grouped by station and time.

Data analysis
• Sex ratio. The sex ratio was determined from the numbers of female and male fish examined.
• Size at first maturity (Lm). The size at first gonadal maturity was estimated using the Spearman-Karber method (Udupa, 1986). The formula used is as follows: m = Xk + (X/2) − (X × ∑pi). The length range is estimated by antilog [m ± 1.96 √var(m)], with var(m) = X² × ∑[(pi × qi)/(ni − 1)]. Description: Lm = the length of the fish at first gonad maturity (antilog of m); m = log of the length of fish at first gonad maturity; Xk = log of the midpoint of the last length class, in which all fish are mature; X = log increment between length classes; pi = proportion of mature fish in the i-th length class; qi = 1 − pi; ni = number of fish in the i-th length class.
• Reproductive potential. The reproductive potential of striped snakehead is based on its fecundity value. In accordance with Efendie (1992), total fecundity was calculated using the gravimetric method. Initially, all gonads that contain eggs were air-dried. Subsequently, the weight of the entire dried gonads was measured, along with the weight of a portion of the dried gonads, and fecundity was calculated using the formula F = (G/g) × n. Description: F = total number of eggs contained in the gonads (fecundity); G = gonad weight per fish; g = weight of the gonad subsample for each individual fish; n = number of eggs counted in the sampled portion of the gonad.
• Spawning type. The spawning type of Striped Snakehead (Channa striata) was determined using data on the diameter of the eggs.
• Spawning places. The identification of spawning sites for Striped Snakehead was accomplished by assessing the number of fish with mature gonads (GML IV) captured and comparing the Gonadal Maturity Index (GMI) values among the sampling stations during the study. The station with the highest number of mature fish and the highest GMI value was considered as the spawning ground for Striped Snakehead in the Floodplain of Lubuk Lampam.
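A minimal numerical sketch of two of the calculations described in the Data analysis section above, the Spearman-Karber estimate of Lm and the gravimetric fecundity; the length-class counts and gonad weights below are illustrative values, not the measurements of this study.

import math

# --- Spearman-Karber estimate of size at first maturity (Lm), after Udupa (1986) ---
midpoints = [24.0, 26.0, 28.0, 30.0, 32.0]   # length-class midpoints (cm), illustrative
n_fish    = [12,   15,   18,   14,   10]      # fish examined per class, illustrative
n_mature  = [1,    4,    10,   12,   10]      # mature fish per class, illustrative

x_mid = [math.log10(L) for L in midpoints]    # log length-class midpoints
X = x_mid[1] - x_mid[0]                       # log-size increment (assumed roughly constant)
p = [m / n for m, n in zip(n_mature, n_fish)] # proportion mature per class
q = [1 - pi for pi in p]

Xk = x_mid[-1]                                # log midpoint of the last (fully mature) class
m = Xk + X / 2 - X * sum(p)                   # Spearman-Karber estimator
var_m = X**2 * sum(pi * qi / (ni - 1) for pi, qi, ni in zip(p, q, n_fish))

Lm = 10**m
ci = (10**(m - 1.96 * math.sqrt(var_m)), 10**(m + 1.96 * math.sqrt(var_m)))
print(f"Lm = {Lm:.1f} cm, 95% range = {ci[0]:.1f}-{ci[1]:.1f} cm")

# --- Gravimetric fecundity, F = (G / g) * n (Efendie, 1992) ---
G = 25.0    # total gonad weight (g), illustrative
g = 2.5     # weight of the gonad subsample (g), illustrative
n = 2100    # number of eggs counted in the subsample, illustrative
F = (G / g) * n
print(f"Estimated fecundity: {F:.0f} eggs")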
Size at First Maturity of Gonads
Data on the level of gonad maturity in relation to the total length of the Striped Snakehead are presented in Figure 2 and Figure 3. According to calculations using the Spearman-Karber method based on the data in Figure 2 and Figure 3, the size at which male Striped Snakehead gonads first mature is approximately 29.30 cm, while the size at first maturity for female Striped Snakehead is approximately 28.5 cm. These data indicate that female Striped Snakehead mature at a smaller size than males. Prianto (2015) noted that gonad maturity is typically reached earlier in female fish than in male fish. The rate of maturation in female fish is believed to be closely associated with environmental factors in the Floodplain of Lubuk Lampam.

In a separate study conducted by Selviana et al. (2020) on Striped Snakehead in the Floodplain of the Sebangau River, the size at first gonad maturity was 27.5 cm for females and 32.17 cm for males. Irhamsyah et al. (2018), in research conducted in the Upper South River of Central Kalimantan, found that female Striped Snakehead first matured at 27.8 cm and males at 32.3 cm. The size at first maturity (Lm) for female Striped Snakehead in Sempor Reservoir is 28.5 cm, while for males it is 30.5 cm (Purnawan, 2021). According to Karmon (2011), in research conducted in the Musi Swamp Watershed, the size at first gonad maturity was 24.4 cm for male Striped Snakehead and 27.7 cm for females.

Spawning Season
Data on the gonadal maturity levels of male and female Striped Snakehead, categorized by month of observation, are shown in Figure 4, while data on the average GMI of male and female Striped Snakehead by month of observation are shown in Figure 5. Based on Figure 4, it is evident that GML IV (gonad maturity level) for both male and female Striped Snakehead is most frequently observed in December and January. Similarly, Figure 5 indicates that the average Gonad Maturity Index (GMI) falls within the range of 7.52% to 9.3% for females and 3.05% to 4.22% for males. Gonad maturity levels of Striped Snakehead (GML I - GML V) were obtained every month, which indicates that Striped Snakehead can spawn throughout the year (Figure 4). According to Tamsil (2016), the consistent presence of fish at GML V every month not only suggests that these fish are capable of spawning on a monthly basis but also indicates that spawning occurs throughout the year.

In December, the highest Gonad Maturity Index (GMI) values were recorded, at 9.3% for female and 4.22% for male Striped Snakehead. In contrast, GMI values for the other months fluctuated. Based on the GML IV and GMI data (Figures 4 and 5) for both male and female Striped Snakehead, it is suspected that the spawning season of Striped Snakehead occurs in December - January and peaks in December, when the rainy season begins. According to Selviana et al.
(2020), the peak of Striped Snakehead spawning in the Sebangau River Floodplain occurs in October (rainy season). According to Makmur & Prasetyo (2006), Striped Snakehead in the waters of the Sambujur River sanctuary spawn all year round, with spawning peaks in the rainy season from October to December and GMI values ranging from 0.01 to 4.83%. Based on Prianto et al. (2015), most of the fish in flooded swamps spawn during the rainy season. According to Welcome (1985), in tropical river ecosystems the prime time for spawning in most fish species occurs during periods of river water overflow or flooding. Additionally, Lagler (1972) noted that fluctuations in water levels can influence and potentially trigger fish reproduction. Furthermore, there is an increased availability of food, as the fish and other creatures that serve as the Striped Snakehead's prey also thrive and reproduce.

Spawning Place
Figure 6 presents the gonadal maturity levels of both male and female Striped Snakehead at each station. The research data depicted in Figure 6 show that the spawning grounds of Striped Snakehead are located at the lebung station in the upper reaches of the Floodplain Lubuk Lampam; at this location there is a significant presence of male and female Striped Snakehead with mature gonads. According to Figure 6, only a small number of Striped Snakehead observed at the River station and the floodplain station had reached gonadal maturity. According to Selviana et al. (2020), Striped Snakehead is not suited to living in flowing and shallow waters.

Spawning Type of Snakehead Fish
Data on the distribution of egg-diameter values are presented in Figure 7. The egg diameters collected in this research align with those of Striped Snakehead in the Sambujur River reserve, which fall within the average range of 0.65 to 0.75 mm, as reported by Makmur and Prasetyo (2006). This measurement is larger than the findings of Selviana et al. (2020), who reported an average egg diameter of 0.55 mm for Striped Snakehead in the Sebangau River Floodplain, and larger than the average of 0.34 mm obtained by Saikia (2013).

According to Figure 7, 35.53% of the eggs of GML IV Striped Snakehead fell in the 0.75-0.85 mm range and 28.04% in the 0.53-0.63 mm range. Based on this grouping, the graph displays a notable dispersion with two distinct peaks, indicating that the Striped Snakehead in the Floodplain of Lubuk Lampam exhibits a partial (prolonged) spawning pattern. As stated by Susilawati (2012), fish displaying a partial spawning pattern have a lengthy spawning period, often spanning several days, which is evident from the presence of various sizes of eggs in their ovaries. Fish that exhibit a partial spawning pattern typically belong to a category of fish with relatively large egg diameters (Kartamihardja, 2014).
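The bimodal spread described above can be summarized by binning the measured diameters into size groups and reporting the percentage of eggs in each group. The short Python sketch below illustrates that grouping, as used to read a graph such as Figure 7; the simulated diameters and bin width are assumptions for demonstration, not the 7,950 measurements from the study.

```python
import numpy as np

# Illustrative egg diameters in mm (simulated, not the study's measurements).
rng = np.random.default_rng(0)
diameters = np.concatenate([
    rng.normal(0.58, 0.03, 400),   # smaller (developing) egg batch
    rng.normal(0.80, 0.03, 500),   # larger (ripe) egg batch
])

# Bin into 0.10 mm size groups and express each group as a percentage.
bins = np.arange(0.43, 1.04, 0.10)
counts, edges = np.histogram(diameters, bins=bins)
percent = 100.0 * counts / counts.sum()

for lo, hi, pct in zip(edges[:-1], edges[1:], percent):
    print(f"{lo:.2f}-{hi:.2f} mm : {pct:4.1f} %")
# Two separated peaks in this table are what is read as a partial
# (prolonged) spawning pattern in the text above.
```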
Figure 1. Map of the Striped Snakehead (Channa striata, Bloch, 1793) research area in Lubuk Lampam, South Sumatra.
Figure 3. Gonadal maturity level (GML) of female fish.

Reproductive Potential
The fecundity values observed in this study for Striped Snakehead ranged from 5,859 to 30,321 eggs (Table 3). The fluctuating fecundity of Striped Snakehead in this study may be attributed to variability in the age of the sampled fish. Generally, younger fish spawning for the first time tend to have lower fecundity than relatively older fish that have experienced multiple spawning events. Furthermore, fluctuations in fecundity can also be attributed to differences in the size of the collected fish, as larger fish typically
have a higher fecundity compared to smaller ones. Sangedighi and Umnoumoh (2011) reported a fecundity range of 1,813 to 18,195 eggs for Striped Snakehead. Ferdausi et al. (2015) found that the highest fecundity for C. striata was recorded in June, with 22,783 eggs, while the lowest was observed in September, with 6,158 eggs. Main et al. (2017) obtained fecundities ranging from 2,538 to 23,987 eggs, and Selviana (2020) found fecundities ranging from 4,341 to 35,507 eggs. Based on Harianti's research data (2013), the fecundity of Striped Snakehead in Lake Tempe, Wajo Regency, ranges from 1,062 to 27,200 eggs. Based on the findings of this study and the literature, the reproductive potential of Striped Snakehead in the Floodplain of Lubuk Lampam is quite high because of its considerably high fecundity.

Figure 4. Gonad Maturity Level (GML) of Striped Snakehead (a) female and (b) male based on the month of observation.
Figure 5. Average male and female GMI.
Figure 6. Number of gonad-ripe Striped Snakehead (GML IV) at each observation station.
Figure 7. Distribution of Striped Snakehead egg diameters by size group.
Table 3. Fecundity values of Striped Snakehead from December 2022 to April 2023.

The Striped Snakehead in the Floodplain of Lubuk Lampam, South Sumatra, has a sex ratio of 1:1.7 (in favour of females). The size at first maturity is 28.5 cm for female and 29.30 cm for male Striped Snakehead. The peak spawning season occurs in December, coinciding with the onset of the rainy season. The spawning area of Striped Snakehead is located at the lebung station in the upper sections of the Floodplain Lubuk Lampam. The Striped Snakehead exhibits a partial spawning pattern, characterized by an extended duration of the spawning process, and has a fairly high reproductive potential, with fecundity ranging from 5,859 to 30,321 eggs.
2023-12-13T16:02:44.739Z
2023-10-20T00:00:00.000
{ "year": 2023, "sha1": "63cc33a3f435f0c1d7976f62ab1d6b662027b4f7", "oa_license": "CCBYNCSA", "oa_url": "https://jurnal.univpgri-palembang.ac.id/index.php/sainmatika/article/download/12828/7716", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "e209eef4af8d580f4a5fa965cc591153dba978ec", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "extfieldsofstudy": [] }
232140965
pes2o/s2orc
v3-fos-license
Post-translational modification enzymes as key regulators of ciliary protein trafficking Abstract Primary cilia are evolutionarily conserved microtubule-based organelles that protrude from the surface of almost all cell types and decode a variety of extracellular stimuli. Ciliary dysfunction causes human diseases named ciliopathies, which span a wide range of symptoms, such as developmental and sensory abnormalities. The assembly, disassembly, maintenance and function of cilia rely on protein transport systems including intraflagellar transport (IFT) and lipidated protein intraflagellar targeting (LIFT). IFT is coordinated by three multisubunit protein complexes with molecular motors along the ciliary axoneme, while LIFT is mediated by specific chaperones that directly recognize lipid chains. Recently, it has become clear that several post-translational modification enzymes play crucial roles in the regulation of IFT and LIFT. Here, we review our current understanding of the roles of these post-translational modification enzymes in the regulation of ciliary protein trafficking as well as their regulatory mechanisms, physiological significance and involvement in human diseases. Serine-Threonine Kinases MAK and ICK in the Regulation of Ciliary Protein Trafficking Various serine-threonine kinases play crucial roles in many aspects of cilia assembly, disassembly, maintenance and function. Two serine-threonine kinases from a branch of CDK/MAPK/GSK3/CLK (CMGC) kinases, male germ cell-associated kinase (MAK) and intestinal cell kinase (ICK), also known as ciliogenesisassociated kinase 1 (CILK1), are proposed to serve as regulators of IFT turnaround at the ciliary tip and ciliary length in mammalian cells (Fig. 1). Since IFT plays a key role in ciliary length control, regulation of IFT is thought to be important for the ciliary length regulation (17). As the names imply, Mak and Ick were originally identified in male germ cells and intestinal crypt cells, respectively (18,19). MAK and ICK are evolutionarily conserved mitogen-activating protein kinase-like kinases that exhibit high homology, especially in their catalytic domains (19)(20)(21). Mak is predominantly expressed in the retina and testis whereas Ick is ubiquitously expressed among tissues (22). In contrast to the distinct tissue distribution patterns, these protein kinases show a similar subcellular localization. MAK localizes to the distal region of ciliary axonemes in retinal photoreceptor cells (23). ICK localizes mainly to the ciliary tip, which is mediated through the anterograde trafficking by 25). Caenorhabditis elegans DYF-5, an orthologue of MAK and ICK, is an IFT cargo molecule transported to the distal segments of sensory cilia (26). In addition, ICK is enriched in ciliary vesicles released from the tip of cilia (27). Mak-deficient mice exhibit elongated photoreceptor ciliary axonemes with the accumulation of IFT88, an IFT-B component, at the distal portion (23). Loss-of-function of ICK causes shortened or elongated cilia, impaired Hedgehog signalling and accumulation of IFT-B, IFT-A and BBSome components at ciliary tips, while ICK overexpression induces accumulation of IFT-B, but not IFT-A and BBSome components at the tip of cilia, suggesting roles of ICK in disassembly of IFT trains in the turnaround process (24, 25, 28-32). Chlamydomonas reinhardtii (Chlamydomonas) LF4, Tetrahymena LF4A, Leishmania mexicana LmxMPK9, and C. 
elegans DYF-5, orthologues of MAK and ICK, are also involved in the regulation of IFT as well as cilia/flagella length and formation (26, 33-37). ICK phosphorylates the C-terminal portion of KIF3A, a subunit of kinesin-2, including Thr674 (24, 38). KIF3A Thr674 is positioned in a consensus amino acid sequence for MAK and ICK phosphorylation that is evolutionarily conserved (39). Localization of KIF3A phosphorylated at Thr674 to ciliary tips is observed in mouse embryonic fibroblasts, which is attenuated by Ick deficiency. Inhibition of phosphorylation on serine/threonine residues including Thr674 at the KIF3A C-terminal portion perturbs cilia formation in cultured cells and zebrafish (24). Mouse embryonic fibroblasts carrying a Thr-to-Ala mutation at residue 674 on KIF3A (KIF3A T674A) show slightly elongated cilia without affecting ciliary localization of IFT88 (40). These observations suggest that ICK phosphorylates other substrate protein(s) in addition to the KIF3A C-terminal portion including Thr674 to regulate IFT and ciliary length. In Chlamydomonas, phosphorylation of the kinesin-2 motor subunit FLA8, an orthologue of KIF3B, at Ser663 is required for IFT turnaround at the flagellar tip (41). Intriguingly, FLA8 Ser663 is located in a consensus amino acid sequence for phosphorylation by MAK and ICK that is evolutionarily conserved among species, suggesting that IFT turnaround at the ciliary tip is mediated by phosphorylation of KIF3B in addition to KIF3A by MAK and ICK in mammals. C. elegans hypomorphic mutants of bbs-1, a gene encoding a BBSome component, show accumulation of IFT-B but not IFT-A components at the ciliary tip, as similarly observed in ICK overexpressing cells (14). Given that the BBSome assembles IFT trains at the ciliary base, disassembly and reassembly of IFT trains at the ciliary tip may be mediated by ICK, as well as probably MAK, and the BBSome, respectively. MAK and ICK in Development, Physiology and Disease Mak-deficient mice are viable and fertile without obvious developmental abnormalities, but exhibit progressive retinal photoreceptor degeneration (23). Interestingly, mutations in human MAK gene were discovered in patients with autosomal recessive retinitis pigmentosa (RP), a retinal degenerative disease (42,43). In contrast, Ick-deficient mice exhibit neonatal lethality accompanied with developmental abnormalities observed in multiple organ systems including the bone, lung, kidney, brain, retina, and inner ear (44). In humans, two homozygous loss-of-function mutations in ICK gene, R272Q and G120C, are associated with endocrine-cerebroosteodysplasia (ECO) syndrome, an autosomal recessive ciliopathy characterized by neonatal lethality with multiple defects involving the endocrine, cerebral, and skeletal systems (31, 45). Another homozygous loss-offunction mutation in human ICK gene, E80K, is associated with short rib-polydactyly syndrome (SRPS), an autosomal recessive ciliopathy showing perinatal lethality with short ribs, shortened and hypoplastic long bones, polydactyly and multiorgan system abnormalities (30). In addition, heterozygous variants in human ICK gene are linked to juvenile myoclonic epilepsy (46). Of them, four strongly linked variants, K220E, K305T, A615T and R632X, affect ICK functions in cilia formation and impair mitosis, cell-cycle exit and radial neuroblast migration while promoting apoptosis in the cerebral cortex (46,47). 
In contrast to Ick-deficient mice, homozygous KIF3A T674A knock-in mice exhibit mildly reduced alveolar airspace in the lung, but are viable without gross abnormalities in the bone and brain, suggesting that other substrate(s) in addition to KIF3A Thr674 are phosphorylated by ICK in vivo. Regulatory Mechanisms of MAK and ICK Activities MAK and ICK are activated by phosphorylation at the TDY motif in their kinase domain by cell cyclerelated kinase (CCRK) in vitro (39). ICK phosphorylation by CCRK negatively regulates ciliogenesis in cultured cells (48). Similar to loss of Mak or Ick, Ccrk deficiency causes dysregulation of cilia length and accumulation of IFT88 at ciliary tips (49). Ccrk-deficient mice show multiple developmental abnormalities associated with dysregulation of Hedgehog signaling, including polydactyly, neural tube patterning defects and malformation of the lung and eye (49)(50)(51). CCRK orthologues, Chlamydomonas LF2 and C. elegans DYF-18, also participates in the IFT regulation as well as cilia/flagella length and formation control (26,36,52,53). LF4 phosphorylation at the TDY motif is diminished in the Chlamydomonas lf-2 mutant, suggesting that the LF2-LF4 signaling axis is evolutionarily conserved among species (54). In contrast to CCRK, fibroblast growth factor (FGF) signalling is thought to negatively regulate the ICK activity. Inactivation of Fgf receptor 1 (Fgfr1) or its ligands results in shortened cilia in zebrafish and Xenopus (55). On the other hand, FGFR3 activation shortened cilia and perturbed the localization of IFT20, an IFT-B component, to cilia in mammals (56,57). Biochemical experiments show that FGFRs interact with, phosphorylate and inactivate ICK. In cultured cells, FGF treatment modulates cilia length through ICK (58). Lipidated Protein Intraflagellar Targeting in Retinal Photoreceptor Cells In parallel to IFT, lipidated protein intraflagellar targeting (LIFT) plays crucial roles in establishing a dynamic ciliary signaling compartment (59). Transport of some lipidated proteins into cilia relies on specific chaperones uncoordinated 119 (UNC119) and phosphodiesterase 6d (PDE6d), which directly recognize lipid chains in the cytoplasmic region and unload their cargoes inside cilia upon interacting with the small ARFlike GTPase protein ARL3 bound to GTP (60)(61)(62)(63). ARL3 is converted from its inactive GDP-bound state to the active GTP-bound state inside cilia by ARL13B, a guanine nucleotide exchange factor (GEF) constitutively localized to cilia (64). In contrast, retinitis pigmentosa 2 (RP2), a GTPase-activating protein (GAP) localized at the base of cilia, is predicted to keep ARL3 in its GDP-bound state outside cilia (65,66). Molecular mechanisms and physiological roles of LIFT have been well studied in retinal rod and cone photoreceptor cells. Rod photoreceptor cells are sensitive to light and are responsible for low light vision, while cone photoreceptor cells operate at a brighter range of light intensities and are responsible for highresolution daylight and colour vision. Subcellular localization of rod transducin, a heterotrimeric G protein that is a mediator of the phototransduction cascade, changes responding to ambient light (67)(68)(69)(70)(71). The inner part and outer segment of the photoreceptor cell are connected by a connecting cilium. Rod transducin is transported to the outer segment from the inner part through the connecting cilium, and then concentrated in the outer segment under dark-adapted conditions. 
After light reception, rod transducin translocates from the outer segment to the inner part through the connecting cilium. This light-and darkdependent translocation of transducin modulates photosensitivity in rod photoreceptors, thereby contributing to light and dark adaptation. Localization of the a-subunit of rod transducin (rTa) to the outer segment is required for normal light sensitivity of rod photoreceptors (72). In contrast, light-dependent rTa translocation from the outer segment to the inner part protects the retina from the light-induced damage (73)(74)(75)(76). UNC119 interacts with the acylated N termini of rTa and suppresses rhodopsin-mediated transducin activation (63,77). The dark-triggered transport of rTa to the outer segment is inhibited in the Unc119deficient mouse retina (63). Deletion of unc-119 in C. elegans also blocks G protein trafficking to cilia, suggesting an evolutionarily conserved role of UNC119 (63). Unc119-deficient mice exhibit a slowly progressive retinal degeneration (78). A heterozygous stop codon (K57X) was found in the human UNC119 gene of an individual with late-onset dominant cone dystrophy (79). Transgenic mice carrying the identical mutation show mitochondrial ANT-1-mediated retinal degeneration (79,80). On the other hand, PDE6d, encoded by Pde6d, is a prenyl-binding protein that promotes translocation of the bc-subunit of rod transducin (rTbc) from the inner part to the outer segment. In the Pde6d À/À mouse retina, rTc, known to be farnesylated, mislocalizes to the inner part (81,82). In addition to rTc, several components of the phototransduction cascade are also prenylated in rod and cone photoreceptors. PDE6a and PDE6b, rod cGMP phosphodiesterase catalytic subunits, PDE6a', a cone cGMP phosphodiesterase catalytic subunit and rhodopsin kinase (GRK1) are farnesylated or geranylgeranylated (83)(84)(85). Reduced localization of GRK1 and PDE6a' to the outer segment as well as mislocalization of rod PDE6 subunits to the inner part are observed in Pde6d À/À rod and cone photoreceptors (82). As a consequence, loss of Pde6d results in altered electrophysiological properties of photoreceptor cells and a slowly progressing retinal degeneration (82). Consistent with Unc119 and Pde6d deficiency, Arl3 conditional knockout mice show trafficking defects of lipidated proteins including rTa, rTc, rod PDE6 and GRK1 to outer segments in rod photoreceptors and subsequent retinal degeneration (86). A mutation in the human ARL3 gene is linked to autosomal dominant RP (87). Deletion of Arl13b in the mouse retina leads to mislocalization of rTa and rod PDE6 subunits but faster retinal degeneration than Arl3 deficiency does, suggesting additional functions of ARL13B other than a GEF for ARL3 in rod photoreceptor cells (88,89). In humans, mutations in the ARL13B gene are associated with Joubert syndrome, an autosomal recessive ciliopathy characterized by multiple symptoms including retinal degeneration (90,91). Rp2-deficient mice exhibit defects in trafficking of GRK1 as well as rod and cone PDE6 to the outer segment, and subsequent slowly progressing retinal degeneration (92). Mutations in the human RP2 gene are associated with X-linked RP, macular atrophy and cone-rod dystrophy (93)(94)(95). Regulation of Transducin Translocation During Light-Dark Adaptation by the CUL3-KLHL18 Ubiquitin Ligase The ubiquitin proteasome system is one of the fundamental regulatory tools used by eukaryotic cells. 
Cullin-RING (really interesting new gene) ubiquitin ligases (CRLs) form one of the largest groups of ubiquitin E3 ligases that regulate diverse cellular pathways (96). Cullin-3 (CUL3) bridges the interaction between the RING protein RBX1 and substrate adaptors which deliver targets for ubiquitination and proteasomal degradation (97). Covalent attachment of the ubiquitin-like protein NEDD8 to Cullin family proteins is required for the activation of the Cullinbased ubiquitin E3 ligases (98). The N-terminus of CUL3 interacts with the Broad Complex, Tramtrack, Bric-a-Brac (BTB) domain of substrate adaptors including the Kelch-like (KLHL) family proteins, which contain one BTB domain, one BTB and C-terminal kelch (BACK) domain, and five to six Kelch repeats ( Fig. 2A) (99). The HUGO Gene Nomenclature Committee (HGNC) presently defines 42 KLHL genes. Recently, Kelch-like 18 (Klhl18), one of the Klhl genes, was found to be predominantly expressed in retinal photoreceptor cells. Klhl18-deficient mice exhibit decreased light responses in rod photoreceptors and rTa mislocalization from the outer segment to the inner part. Loss of Klhl18 or treatment of MLN4924, a small molecule inhibitor of the NEDD8-activating enzyme (NAE) (100), suppresses light-induced retinal degeneration. The CUL3-KLHL18 ubiquitin ligase ubiquitinates and degrades UNC119 in rod photoreceptor cells, preferentially under dark conditions. UNC119 overexpression phenocopies the rTa mislocalization observed in the Klhl18 À/À mouse retina. These observations suggest that CUL3-KLHL18 In rod photoreceptors, intracellular Ca 2þ concentration under dark conditions is higher than that under light conditions. In darkness, CK2-phosphorylated UNC119 is dephosphorylated by calcineurin, a Ca 2þ -dependent phosphatase. CUL3-KLHL18 efficiently ubiquitinates and degrades the dephosphorylated UNC119, thereby facilitating rTa transport from the inner part to the outer segment. In the light, calcineurin-mediated dephosphorylation of CK2-phosphorylated UNC119 is suppressed due to low Ca 2þ concentration. Inefficient ubiquitination and degradation of the phosphorylated UNC119 by CUL3-KLHL18 results in increased amounts of UNC119 and subsequent inhibition of rTa translocation to the outer segment. CaN, calcineurin. Post-translational modification enzymes in ciliary transport modulates rTa translocation during light and dark adaptation through UNC119 ubiquitination and degradation. Furthermore, regulatory mechanisms underlying the CUL3-KLHL18-UNC119 axis were investigated. The phosphorylation level of UNC119 is elevated under light conditions compared with that under dark conditions. UNC119 is phosphorylated by casein kinase 2 (CK2), which is a serine/threonine kinase expressed in retinal photoreceptor cells (101). In contrast, UNC119 is dephosphorylated by Calcineurin, a Ca 2þ -and calmodulin-dependent serine/threonine protein phosphatase (102,103). UNC119 degradation by CUL3-KLHL18 is suppressed and facilitated by UNC119 phosphorylation and dephosphorylation, respectively. Inhibition of CK2 decreases the amount of UNC119 in the retina, while inhibition of Calcineurin increases the retinal expression level of UNC119, causes rTa mislocalization to the inner part of photoreceptors, and protects the retina from light-induced damage. Collectively, these results suggest that CUL3-KLHL18 promotes UNC119 ubiquitination and degradation depending on phosphorylation, thereby regulating light-and dark-dependent rTa translocation in the retina (Fig. 2B) (104). 
Given that light exposure is a suspected risk factor for the progression of age-related macular degeneration and RP (105-108), inhibition of this signaling pathway may be a potential therapeutic target. Concluding Remarks It has recently become clear that several posttranslational modification enzymes play key roles in the regulation of ciliary protein trafficking. Cumulative evidence of the functional mechanisms of MAK and ICK has unravelled many aspects of the physiological significance of the coordinated IFT turnaround at the ciliary tip and its involvement in human ciliopathies. Identification and functional characterization of the CUL3-KLHL18 ubiquitin ligase in retinal photoreceptor cells put forward our knowledge of a series of regulatory mechanisms from photoreception to light-dark adaptation. However, these findings raise new questions. For example, assuming that IFT trains are disassembled by MAK and ICK at the ciliary tip, how are they reassembled for retrograde IFT? In parallel to the BBSome as mentioned above, serinethreonine phosphatase(s) may also contribute to reassembly of IFT trains at the ciliary tip through dephosphorylating KIF3A and other substrate(s). In addition, although rTbc also shows light-and darkdependent translocation between the outer segment and the inner part of retinal photoreceptor cells, which modulates rod light responses (109), rTbc translocation is independent of CUL3-KLHL18. PDE6d ubiquitination and degradation by an unknown E3 ubiquitin ligase may underpin rTbc translocation during light and dark adaptation. Further analyses are needed to unveil comprehensive regulatory mechanisms underlying ciliary protein trafficking, whose understanding would contribute to develop therapeutic strategies to treat human diseases.
2021-03-09T06:22:57.663Z
2021-03-03T00:00:00.000
{ "year": 2021, "sha1": "83eef84f3b233894aeb73ba978b0e5adbd4374ac", "oa_license": "CCBY", "oa_url": "https://academic.oup.com/jb/advance-article-pdf/doi/10.1093/jb/mvab024/37038372/mvab024.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "2c492e87a163f45a8f1af9416fe6b9adc4f291b8", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
250592173
pes2o/s2orc
v3-fos-license
Targeting strategies in the treatment of fumarate hydratase deficient renal cell carcinoma Fumarate hydratase (FH) - deficient renal cell carcinoma (FHdRCC) is a rare aggressive subtype of RCC caused by a germline or sporadic loss-of-function mutation in the FH gene. Here, we summarize how FH deficiency results in the accumulation of fumarate, which in turn leads to activation of hypoxia-inducible factor (HIF) through inhibition of prolyl hydroxylases. HIF promotes tumorigenesis by orchestrating a metabolic switch to glycolysis even under normoxia, a phenomenon well-known as the Warburg effect. HIF activates the transcription of many genes, including vascular endothelial growth factor (VEGF). Crosstalk between HIF and epidermal growth factor receptor (EGFR) has also been described as a tumor-promoting mechanism. In this review we discuss therapeutic options for FHdRCC with a focus on anti-angiogenesis and EGFR-blockade. We also address potential targets that arise within the metabolic escape routes taken by FH-deficient cells for cell growth and survival. uncommon, sporadic and familial cancers, of which about 15% are benign such as oncocytoma, metanephric tumors or angiomyolipoma (2). In ccRCC, chromosome 3p deletions are found in about 70% to 90% (3). Furthermore, inactivation of the VHL gene has been demonstrated in 100% of familial ccRCC and in 57% of sporadic ccRCC (4). Two subtypes of pRCC are known, type I and II, with type I tumors having better survival than type II (5). Chromosomal gain of chromosomes 7 and 17 are the most consistent and characteristic found genetic alteration (6). Originating from distal convoluted tubules, chRCC is characterized by multiple chromosomal losses of chromosomes Y, 1, 2, 6, 10, 13q, 17, and 21 (7). Metabolic reprogramming appears to be a key core aspect in the development and progression of RCC. The VHL gene is an important regulator that is often inactivated and consequently involves HIF activation, which in turn stimulates the reprogramming of several metabolic pathways. Reprogramming of the glucose, fat and amino acid metabolism dysregulate the tricarboxylic acid cycle leading to a subsequent pro-oncogenic cellular environment. Effective inhibitors or drugs reversing reprogrammed metabolic pathways is the basis of new and currently more emerging targeted therapies against RCC (8). RCC metabolism Metabolism of glucose, amino acids and lipids is a key determinant of tumor growth (1 (9). RCC is a metabolically very active tumor. The key factors which are closely linked with tumor microenvironment (TME) metabolic alterations in RCC include HIF, fatty acid synthase (FASN) and pyruvate kinase 2 (PKM2), being potential targets in cancer therapy (10). Glycolysis and hypoxia play a key role in the failure of RCC therapy. ccRCC shows disturbed metabolism of glucose, amino acids and lipids (11,12), driving tumor growth and affecting prognosis (13). Moreover, the significant feature of most ccRCCs is the loss of VHL, which causes HIF accumulation and drives the cellular hypoxic response. Consecutively, significant variations in metabolism have been found in ccRCCs (13,14). In addition, molecular patterns involving metabolic pathways correlated with worse survival in ccRCC, including downregulation of AMP-activated kinase (AMPK) complex and the Krebs cycle genes, upregulation of genes involved in the pentose phosphate pathway and fatty acid synthesis (15). 
Functional deficiency of succinate dehydrogenase (SDH), resulting in succinate accumulation, is a common feature in up to 80% of ccRCCs (16). Survival analyses of the TCGA-KIRC dataset confirmed poor survival rates in case of lower expression of the SDH subunits SDHB, SDHC and SDHD (16). Notably, RCC tumors show a remarkable metabolic heterogeneity (17,18), however, still little is known how specific cellular components of the TME contribute to therapy response or resistance mechanisms. Modulation of metabolic 'checkpoints' driven by HIF and other factors may provide a new therapeutic strategy to reverse TME features responsible for RCC drug resistance. Metabolic changes including enhanced glycolysis and glutaminolysis as well as increased antioxidant activity have been associated with TKI resistance in RCC (19). The frequency of the metabolic reprogramming may render RCC a suitable disease for the investigation of potential novel therapeutic agents that target tumor metabolism. Fumarate hydratase deficient renal cell carcinoma (FHdRCC) FHdRCC is a rare and aggressive subtype of type 2 papillary RCC, mainly affecting younger patients, metastazing early from small solitary lesions. It is caused by an inactivating mutation of the FH gene with a 15% lifetime risk for FH mutation carriers to develop RCC (20). Conversely, restoration of FH activity has been shown to stop development of FHdRCC and to restore the physiological metabolic phenotype in an animal model (21). FHdRCC can be associated with the hereditary leiomyomatosis and renal cell cancer (HLRCC) syndrome, an autosomaldominant hereditable syndrome with predisposition to develop smooth muscle tumors of the skin and uterus. In the largest case series of HLRCC including 185 patients from 69 families, 12.4% developed RCC resulting in a lifetime risk of 21%. Although RCC occurs only in a minority of cases, it presents highly aggressive with poor survival in symptomatic patients with stage 3 to 4 disease. Thus, renal imaging screening should be offered to these patients resulting in earlier-stage diagnosis of RCC with consecutive survival benefit (20). The WHO lists the subtype 'HLRCC-associated RCC' in their RCC classification, which strictly speaking defines a group of genetically altered types of FHdRCC. The more general term FHdRCC should rather be used to include both hereditary and known sporadic forms of cancer development (22). FHdRCC shows histopathological features such as papillary architecture with tubule cystic growth patterns, abundant eosinophilic cytoplasm, perinucleolar halos and as shown recently, tumor cannibalism and lymphocytic emperipolesis (23). Yet, it still remains difficult to distinguish FHdRCC from papillary RCC by means of solely pathological criteria. Metabolic changes, epigenetic and signalling pathway alterations and therapeutic targets in FHdRCC FH (EC 4.2.1.2) participates in the mitochondrial TCA cycle, which serves the production of cellular energy production in the form of ATP through oxidative phosphorylation (OXPHOS). FH catalyzes the conversion of fumarate to L-malate. Fumarate itself is the product of succinate oxidation by succinate dehydrogenase (16). The TCA cycle is fueled by acetyl-CoA, which condenses with oxaloacetate to form citrate in the first step of the pathway. To keep the TCA cycle going, acetyl-CoA must therefore continuously be generated either by oxidative decarboxylation of pyruvate, by oxidation of long-chain fatty acids, or by oxidative degradation of certain amino acids. 
When glucose levels are low or TCA cycle intermediates are diverted for biosynthetic purposes, cells use glutaminolysis to maintain the TCA cycle. In this pathway, glutaminase breaks down glutamine to form glutamate, which is further converted to aketoglutarate (a-KG). Of note, glutamine replenishment of the TCA cycle can result in fumarate accumulation even in FHcontaining cells (24). An exciting study by Frezza et al. has addressed the mechanism that allows FH-deficient immortalized kidney cells to survive without a functional TCA cycle. By combining gas chromatography-mass spectrometry (GC-MS), liquid chromatography-mass spectrometry (LC-MS) and a computer model of metabolism, the authors found that FHdeficient cells used glutamine to survive and that accumulating fumarate was mainly glutamine-derived (24-26). In a cataplerotic pathway, FH-deficient cells use the accumulated TCA cycle intermediate succinate to initiate porphyrin biosynthesis via succinyl-CoA. However, the resulting heme, i.e. iron protoporphyrin IX, is immediately degraded again by heme oxygenase (HMOX) and other enzymes. This apparently futile cycle of concomitant haem biosynthesis and degradation, enables FH-deficient cells to generate at least some mitochondrial NADH. The HMOX inhibitor zinc protoporphyrin or HMOX silencing impaired FH-deficient cell growth and colony formation. Likewise, inhibition of haem biosynthesis using hemin, an approved drug used for acute porphyria, reduced colony formation of FH-deficient cells, altogether indicating that cells lacking FH critically depend on this unusual pathway and that haem biosynthesis and degradation pathways may be attractive targets in the treatment of FHdRCC (24, 26). Yet another pathway that increases cellular levels of fumarate is the urea cycle, which converts toxic ammonia to urea for subsequent excretion. Ammonia mainly accumulates during amino acid catabolism. Fumarate is produced in the 4 th step of the cycle, when argininosuccinate is converted to arginine. Intriguingly, FH-deficient cells reverse this step and use fumarate and arginine to form argininosuccinate. Reversal of this metabolic step appears to be critical, since arginine depletion impaired cell proliferation and survival of FH-deficient cells (27). Therefore, arginine deprivation or targeting argininosuccinate synthase should be beneficial in this particular subtype of RCC (28). It is not surprising that the lack of FH changes many cellular functions in FHdRCC. Fumarate and its immediate precursor succinate can act as oncometabolites by competitively inhibiting a-ketoglutarate (a-KG)-dependent dioxygenases, a family of enzymes that also includes prolyl hydroxylases. Suppression of protein prolyl hydroxylation slows down the degradation of HIF (21) and HIF stabilization in turn promotes tumorigenesis through transcriptional activation of pro-angiogenic genes including vascular endothelial growth factor (VEGF). HIF also activates glycolytic genes that contribute to the Warburg effect, a metabolic shift to aerobic glycolysis in normoxia. In addition to prolyl hydroxylase, fumarate and succinate inhibit other a-KGdependent dioxygenases. Accumulating fumarate and succinate for instance suppress histone and DNA demethylases by depletion of a-KG causing genome-wide epigenetic modifications that further contribute to tumorigenesis (29). The mechanism of a-KG inhibition through accumulating fumarate is still debated, literature to date refers to this as a competitive inhibition process (30). 
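Because the review describes fumarate (and succinate) as competitive inhibitors of α-KG-dependent dioxygenases, the effect can be pictured with the standard Michaelis-Menten competitive-inhibition rate law. The sketch below uses arbitrary kinetic constants (Vmax, Km and Ki are assumptions, not measured values for any particular dioxygenase) purely to show how rising fumarate raises the apparent Km for α-KG without changing Vmax.

```python
# Standard competitive-inhibition rate law, used here only to illustrate the
# qualitative effect described in the text; all constants are arbitrary.

def dioxygenase_rate(s_akg, i_fumarate, vmax=1.0, km=10.0, ki=5.0):
    """v = Vmax*[S] / (Km*(1 + [I]/Ki) + [S]) for a competitive inhibitor."""
    km_apparent = km * (1.0 + i_fumarate / ki)
    return vmax * s_akg / (km_apparent + s_akg)

for fumarate in (0.0, 5.0, 50.0):   # rising oncometabolite levels (arbitrary units)
    v = dioxygenase_rate(s_akg=10.0, i_fumarate=fumarate)
    print(f"[fumarate]={fumarate:5.1f}  relative dioxygenase activity={v:.2f}")
# Activity falls as fumarate accumulates, mimicking the inhibition of prolyl
# hydroxylases and histone/DNA demethylases discussed above.
```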
For example, the histone de-methylation process is inhibited resulting in hypermethylation at histones H3K9 and H3K27 (31). Thus, transcription of tumor suppressor genes and differentiation genes is inhibited contributing to cell dedifferentiation, tumor progression and drug resistance (31). A TCGA pan-Kidney Cancer Analysis of 843 RCC confirmed distinctive features of each RCC subtype. FHdRCC was characterized by a CpG island methylator phenotype, DNA hypermethylation/CDKN2A alterations and increased immune signature expression for select immune gene signatures, including Th2 gene signature (similar to ccRCC) (32). In detail, poor outcome was linked in all RCC subtypes with hypermethylation of WNT pathway regulatory genes (SFRP1 and DKK1), suggesting that hypermethylation of SFRP1 and DKK1 might be a promising prognostic biomarker in RCC (32). These observations may provide the therapeutic rationale of introducing immune checkpoint inhibitors, CDK4/6 inhibitors (33) and de-methylating agents in patients with FHdRCC. In cancer cells with FH deficiency, fumarate can also react with cysteine residues of proteins in a non-enzymatic manner, generating S-(2-succinyl)cysteine. Such a protein modification, which is known as cysteine succination, eliminates the ability of Kelch-like ECH-associated protein 1 (KEAP1) to repress nuclear factor (erythroid-derived 2)-like 2 (NRF2) (34). Although activation of NRF2, a master regulator of antioxidant responses, in cancer can be beneficial, the fumarate-induced stabilization of NRF2 in HLRCC facilitates tumor growth and survival (34). SMARCC1 is a core member of the tumor suppressing SWI-SNF chromatin remodelling complex and also affected by succination. In ccRCC, SMARCC1 is commonly deleted because of its position on chromosome 3, which is known as a potentially tumor-promoting region in RCC. Using a fumarate-competitive chemoproteomic probe in concert with LC-MS, a recent study has identified a novel FHregulated cysteine residue in SMARCC1 which is subject to succination (35). As a consequence, SWI-SNF complex formation is impaired and thus its tumor-suppressing activity is reduced. In certain tumors, HIF stabilization and accumulation can also be induced by activation of epidermal growth factor receptor (EGFR) (36). EGFR signalling can increase the levels o f H I F u n d e r n o r m o x i c c o n d i t i o n s t h r o u g h t h e phosphoinositide 3-kinase (PI3K)/AKT pathway. EGFR can thus promote the Warburg effect. Consistent with EGFRmediated HIF stabilization, EGFR inhibitors such as erlotinib can decrease the expression of the HIF target VEGF (37). Accordingly, resistance to EGFR inhibitors can be associated with increased levels of VEGF in the tumor microenvironment. Altogether, these observations have led to the concept of combining EGFR and VEGF(R) inhibitors (38). However, there are currently no reports about alterations of EGF signalling in FH-deficient tumors supporting the combined application of EGFR and VEGF(R) inhibitors. The metabolic shift towards glycolysis in FHdRCC has been shown to lower the levels of AMPK (21). AMPK is a highly conserved metabolic sensor that governs cellular adaptation to energy deficiency and environmental stress. AMPK acts as a metabolic tumor suppressor by activating p53 and by regulating mammalian target of rapamycin (mTOR). The attenuation of AMPK in FHdRCC facilitates the activation of mTOR, which then promotes the biosynthetic pathways required for cell proliferation. 
Epidemiological studies indicate that the incidence of cancer is reduced in type 2 diabetes treated with t h e A M P K a c t i v a t o r m e t f o r m i n ( 3 9 ) . T h e r e f o r e , pharmacological re-activation of AMPK using metformin would in principle also be a promising approach in the treatment of FHdRCC. However, metformin-induced AMPK activation occurs via inhibition of respiratory chain Complex I. By inhibiting mitochondrial OXPHOS, ATP is depleted, ultimately resulting in AMPK activation (40). Considering that FH deficiency itself causes a strong suppression of the mitochondrial respiration, it is currently unclear whether treatment with metformin would indeed be beneficial (26,41). Clinical trials in HLRCC and FHdRCC In metastatic FHdRCC, therapy regimens with immune checkpoint inhibitors (ICI), mTOR inhibitors, multi-target tyrosine kinase inhibitors (TKI) and various combinations have been tested in the past. PD-1/PD-L1 expression in tumor cells and tumor-infiltrating lymphocytes have been found only in a small proportion of FHdRCC cases. Since ICIs failed to induce satisfactory response rates, they should only be offered to patients with a PD-L1 positive tumor (42,43). Dual inhibition of mTOR and VEGF was reported with ORR rates of up to 44%, whereas patients treated with mTOR inhibition alone showed no response (43,44). TKIs were superior to mTOR inhibitors or ICIs alone, presenting with an ORR of up to 64% and a time to progression of 11.6 months (44). Cabozantinib, which is approved for metastatic RCC, might be the preferred TKI in HLRCC (45). In the AVATAR trial (NCT01130519), the combination of bevacizumab and erlotinib has shown promising preliminary results in HLRCC patients (46). In comparison to sporadic papillary RCC, the HLRCC cohort benefitted with an ORR of 72% and a median PFS of 21.1 months versus 35% and 8.8 months, respectively (46). Findings from a first-line setting of this combination revealed that FHdRCC patients treated with bevacizumab and erlotinib showed an ORR of 50% with a median PFS of 13.3 months and an impressive disease control rate of 90% (47). Pharmacological re-activation of AMPK using metformin may also be a promising approach in the treatment of FHdRCC. The combination of the VEGF/EGFR inhibitor vandetanib with metformin has already been tested in a phase II study (NCT02495103) but was not continued due to the lack of vandetanib availability. All registered studies in patients with FHdRCC are summarized in Table 1. Outlook The future agenda may involve targeting of cancer cell metabolism at the level of glutaminolysis (48), to slow down TCA cycle activity and to avoid fumarate accumulation ( Figure 1). The strict arginine dependence of FH-deficient cells also suggests arginine deprivation as an appropriate therapeutic approach (27). Moreover, OXPHOS, which generates ATP in a TCA cycle dependent manner, can be targeted using drugs such as metformin or possibly also atovaquone, and arsenic trioxide (50) to revert the tumorigenic metabolism through AMPK activation (50), provided that side effects are manageable. Gene therapy-based replacement of defective enzymes in target tissues, which is a current effort in the treatment of metabolic diseases (51), may be an ultimate goal help to improve the therapeutic opportunities of FH-deficient Metabolic changes, signalling pathway alterations and therapeutic targets in FHdRCC. 
(A) Metabolic changes in FH deficiency: in the glycolytic pathway, glucose is converted to pyruvate by glycolysis (GLY), which can enter the mitochondria and fuel the tricarboxylic acid cycle (TCA) cycle, also known as citrate as well as Krebs cycle. Succinate dehydrogenase (SDH) generates fumarate and fumarate hydratase (FH) catalyzes the stereospecific hydration of fumarate to form L-malate. Glutaminase (GLS) breaks down glutamine to form glutamate, which is further converted to a-ketoglutarate and feeds the TCA cycle. In the urea cycle, argininosuccinate is cleaved by argininosuccinase (ASL), producing additional fumarate. This step can be reversed by the argininosuccinase synthase (ASS) if argininosuccinate is required (27). In FH deficiency, fumarate accumulating in the mitochondria can leak out to the cytosol and become an 'oncometabolite'. (B) Fumarate-induced activation of signalling pathways: cytosolic fumarate, like succinate, inhibits a family of prolyl hydroxylases (PHDs), which under normoxia destabilize hypoxiainducible factor (HIF) through hydroxylation of prolyl residues. Fumarate (and succinate)-induced PHD inhibition causes HIF-1a accumulation. In the nucleus, HIF-1a activates the transcription of target genes including vascular endothelial growth factor (VEGF) and glycolytic genes, initiating the metabolic shift known as the Warburg effect (left panel). Fumarate and succinate accumulation can also act as a-ketoglutarate (a-KG) antagonist, inhibiting a-KG-dependent dioxygenases. Thus, histone de-methylation process catalyzed by histone demethylases (KDM) is inhibited. Consecutively, epigenetic alterations including hypermethylation at histone markers H3K9 and HRK27 inhibit tumor suppressor genes resulting in tumor progression, drug resistance and cell dedifferentiation (31, mid panel). In a non-enzymatic process, high concentrations of fumarate can also lead to the succination of cysteine residues of Kelch-like ECH-associated protein 1 (KEAP1), which thus loses its ability to prevent nuclear factor (erythroid-derived 2)-like 2 (NRF2)-mediated antioxidant responses (right panel). In FHdRCC, NRF2 activation is protumorigenic. (C) Targeting strategies for FHdRCC: fumarate-induced HIF activation promotes tumor angiogenesis and proliferation through VEGF signalling, suggesting the use of bevacizumab to neutralize VEGF. Activation of the epidermal growth factor receptor (EGFR), a frequent event in many tumors, can also activate HIF-1a, and HIF-1a induced VEGF can contribute to the resistance against EGFR inhibitor such as erlotinib. This has led to bevacizumab plus erlotinib combination therapy in certain cancer types including FHdRCC. The metabolic shift arising from FH deficiency results in decreased levels of adenosine monophosphate (AMP)-activated protein kinase (AMPK) and, as a consequence, of the tumor suppressor p53. Raising the activity of AMPK again may therefore also be desirable in FHdRCC, which can be achieved using metformin, an indirect AMPK activator (49). In addition, targeting the metabolic escape routes of FH-deficient cells through inhibitors of heme biosynthesis and degradation would be attractive in the treatment of FHdRCC (26). Given the strict arginine dependence of FH-deficient tumor cells arginine deprivation (27) and de-methylating agents (32) according to DNA CGI hypermethylation phenotype might be a therapysupporting concept. RCC. 
Moreover, targeting the epigenetic machinery to decrease the oncogenic impact of metabolic changes may be effective also in FHdRCC patients (31). Take home messages In FHdRCC, the oncometabolite fumarate activates HIF-1a and promotes the tumorigenic Warburg effect. Anti-VEGF antibody bevacizumab plus EGFR inhibitor erlotinib showed response rates up to 72%. Targeting tumor metabolism aimed at AMPK re-activation may further improve FHdRCC therapy. Given the metabolic escape routes in FH-deficient cells, inhibition of haem biosynthesis and/or degradation as well as arginine deprivation offer novel therapeutic opportunities in FHdRCC. Additionally, DNA CpG islands (CGI) hypermethylation status might be a cornerstone of introducing de-methylating agents in the therapeutic landscape of FHdRCC. Conclusions In conclusion, FHdRCC is a genetically or sporadically acquired aggressive variant of type 2 papillary kidney cancer that affects younger patients. Understanding the metabolic changes and signalling pathway alterations caused by FH deficiency may facilitate the development of more effective therapies for this aggressive and mostly fatal disease. Since the incidence of FHdRCC is low, it does not seem justified to screen all RCC patients for genetic abnormalities in the FH gene. However, it is important to raise awareness for clinical correlations and morphological signs to initiate further molecular and germline analysis. Long-lasting partial responses and manageable low-grade toxicities suggest that the combination therapy with bevacizumab and erlotinib should be considered as standard first-line therapy in locally advanced/ metastatic FHdRCC.
2022-07-17T15:18:28.675Z
2022-07-15T00:00:00.000
{ "year": 2022, "sha1": "061881d1a497b26b1ed85fb29509f8d598b4bd8f", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fonc.2022.906014/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "93e2986c5dafa335fb719e91933aac20402b3627", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
216033086
pes2o/s2orc
v3-fos-license
Cytokine network analysis of immune responses before and after autologous dendritic cell and tumor cell vaccine immunotherapies in a randomized trial. Background In a randomized phase II trial conducted in patients with metastatic melanoma, patient-specific autologous dendritic cell vaccines (DCV) were associated with longer survival than autologous tumor cell vaccines (TCV). Both vaccines presented antigens from cell-renewing autologous tumor cells. The current analysis was performed to better understand the immune responses induced by these vaccines, and their association with survival. Methods 110 proteomic markers were measured at a week-0 baseline, 1 week before the first of 3 weekly vaccine injections, and at week-4, 1 week after the third injection. Data was presented as a deviation from normal controls. A two-component principal component (PC) statistical analysis and discriminant analysis were performed on this data set for all patients and for each treatment cohort. Results At baseline PC-1 contained 64.4% of the variance and included the majority of cytokines associated with Th1 and Th2 responses, which positively correlated with beta-2-microglobulin (B2M), programmed death protein-1 (PD-1) and transforming growth factor beta (TGFβ1). Results were similar at baseline for both treatment cohorts. After three injections, DCV-treated patients showed correlative grouping among Th1/Th17 cytokines on PC-1, with an inverse correlation with B2M, FAS, and IL-18, and correlations among immunoglobulins in PC-2. TCV-treated patients showed a positive correlation on PC-1 among most of the cytokines and tumor markers B2M and FAS receptor. There were also correlative changes of IL12p40 with both Th1 and Th2 cytokines and TGFβ1. Discriminant analysis provided additional evidence that DCV was associated with innate, Th1/Th17, and Th2 responses while TCV was only associated with innate and Th2 responses. Conclusions These analyses confirm that DCV induced a different immune response than that induced by TCV, and these immune responses were associated with improved survival. Trial registration Clinical trials.gov NCT004936930 retrospectively registered 28 July 2009 melanoma [5]. Clinical studies utilizing autologous dendritic cells loaded with antigens from autologous tumor cells have been especially promising [6][7][8][9]. A randomized phase II trial tested two vaccines featuring autologous tumor antigens (ATA): injections of autologous dendritic cells loaded ex vivo with antigens from autologous tumor cell lines (DCV), and tumor cell vaccines (TCV) consisting of irradiated autologous proliferating tumor cells [8,9]. An early analysis showed that DCV was associated with better survival [8], and this was confirmed when 5-year follow up showed a more than doubling of median survival and 3-year survival rate, and a 70% reduction in the risk of death [9]. This DCV approach is currently being tested in phase II trials in glioblastoma and ovarian cancer, and in a phase IB trial in combination with monoclonal antibodies to programmed death-1 protein (PD-1) in melanoma patients. One of the objectives of vaccine clinical trials is to increase understanding of the immune responses that are induced or enhanced by such vaccines and correlating these responses with survival. In the randomized phase 2 analysis of 110 cytokines showed that variation in specific cytokine groupings were similar at baseline but differed markedly following three vaccinations [9]. 
A combination of baseline soluble programmed death protein-1 (sPD-1) and changes in sPD-1 after three injections was strongly predictive of 3-year survival in DCV-treated patients, but not TCV-treated [10]. The current analysis was conducted in an effort to better understand the nature of these immune responses in terms of classical concepts of innate and adaptive immune responses as reflected by changes in cytokines in response to vaccine therapy [58]. Serum samples Blood samples from melanoma patients enrolled in a randomized phase 2 trial (clinicaltrials.gov NCT00436930) were obtained at week-0, 1 week before the first of 3 weekly injections of DCV or TCV vaccines, and week-4, 1 week after the third weekly injection [8,9]. The trial was approved by the Western Institutional Review Board (Seattle, WA., WIRB ® Protocol #20090753). Patients gave written informed consent for randomization to DCV or TCV, and blood collection and analysis. The protocol and manufacturing procedures were reviewed by the US Food and Drug Administration (BB-IND 5838 and BB-IND 8554). TCV consisted of irradiated autologous tumor cells from a short-term cell line; DCV consisted of autologous dendritic cells incubated with irradiated autologous tumor cells [8,9]. Both DCV and TCV were admixed in granulocyte-macrophage colony stimulating factor (GM-CSF) just prior to subcutaneous injections scheduled for weeks 1, 2, 3, 8, 12, 16, 20, and 24. Analysis of serum markers Cryopreserved 200-microliter serum samples from week-0 and week-4 were analyzed for 110 cytokines, growth factors, proteases, soluble receptors and other proteins as shown in a supplementary table of a previous publication [9], using a quantitative, multiplex enzymelinked immunosorbent assay (Quantibody ® Cytokine Array, Raybiotech, Inc., Norcross, GA.). Values were expressed as absolute concentration (pg/mL) and as percentage differences above or below the mean value from three normal controls. Principal component analysis Principal component analysis (PCA) was performed using IBM SPSS Statistics V26. PCA transforms data into a coordinate system by creating new uncorrelated variables that successively maximize variance [59,60]. The most important use of PCA is to represent a multivariate data table as smaller sets of variables (summary indices) in order to observe trends, jumps, clusters and outliers. PCA is valuable for exploratory analysis of extensive data and especially useful for integrating genomics and proteomic datasets in effort to understand biological processes [61][62][63]. In a two-component PCA model variables are distributed in a two-dimensional plane so that the largest amount of variance is grouped in principal component-1 (PC-1) along one axis and the next highest is grouped in principal component-2 (PC-2) along an orthogonal axis. The distribution is based on correlation coefficients that vary from 0 to 1 or 0 to − 1 from the origin, which quantitates the strength of positive or negative correlation. Positively correlated variables cluster in the same side of at least one of the components (influential), or in the same quadrant (correlated). Negatively correlated variables are positioned on opposite sides of the origin in diagonally opposed quadrants. After plotting, the component axes can be rotated for optimal delineation of the groups, following established procedures (e.g. Varimax, Quartimax, Promax, etc.) that give a best fit among variables. 
The rotation changes only the positioning of the components relative to each other for easier interpretation. The Kaiser-Meyer-Olkin Measure of Sampling Adequacy (KMO) was used to indicate the proportion of variance that might be attributed to specific variables. High values (close to 1.0) indicate that a factor analysis of the data will be useful, as opposed to values less than 0.50. Bartlett's Test of Sphericity (BTS) was used to test the hypothesis that the correlation matrix is an identity matrix, which would indicate that variables are unrelated and therefore unsuitable for structure detection. Lower significance levels (less than 0.05) indicate that a factor analysis of the data may be useful. Discriminant analysis Discriminant analysis (DA) was used to interpret multiple discriminant functions arising from analyses involving more than two groups and more than one variable [64]. DA is very useful for detecting variables that discriminate between different groups and can classify cases into different groups with better-than-chance accuracy. It is especially useful for interpreting large databases such as those resulting from proteomics [65][66][67]. The objective is to develop discriminant functions that are linear combinations of independent variables (i.e. changes of cytokines) that will discriminate between the categories of the dependent variable (i.e. survival groups). It enables one to examine whether significant differences exist among the groups based on the predictor variables and evaluates the accuracy of the classification. Several variables are included in order to see which one(s) contribute to discriminating between groups, and then a matrix of total variances and covariances is created as well as a matrix of pooled within-group variances and covariances. Matrices are compared via multivariate F tests to determine whether there are significant differences (with regard to all variables) between groups. This procedure is identical to multivariate analysis of variance, or MANOVA. In stepwise discriminant function analysis, a model of discrimination is built step-by-step. At each step all variables are reviewed and evaluated to determine which one contributes most to discriminating between groups. That variable is included in the model, and the process starts again. The stepwise procedure is "guided" by F-to-enter and F-to-remove values for statistically based discrimination between groups; i.e., it is a measure of the extent to which a variable makes a unique contribution to the prediction of group membership. When performing a multiple-group discriminant analysis, an optimal combination of variables is automatically determined so that the first function provides the most discrimination between groups, the second provides the second most, and so on. The larger the standardized coefficient, the greater the variable's contribution to discrimination. Another way to determine which variables define a particular discriminant function is to examine the factor structure coefficients, which are the correlations between variables in the model and the discriminant functions. A Bonferroni correction of p values was used in analyses that included multiple comparisons. Principal cytokine pathway analysis The pathway analysis considered the findings from both PCA and DA. The methods to investigate the effect of these pathways on survival included linear regression analysis and tests of equality of group means (univariate ANOVA). 
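For readers who want to reproduce this style of analysis outside SPSS, the sketch below outlines the same two-component PCA workflow in Python. It is illustrative only: the DataFrame name, the marker columns, and the use of the optional factor_analyzer package for the KMO and Bartlett checks are our assumptions, not part of the published analysis.

```python
# Minimal sketch of a two-component PCA on a hypothetical patients x markers
# DataFrame of percentage deviations from normal-control values.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

def two_component_loadings(marker_df: pd.DataFrame) -> pd.DataFrame:
    """Return the loading of each marker on PC-1 and PC-2."""
    z = StandardScaler().fit_transform(marker_df.values)   # standardize each marker
    pca = PCA(n_components=2).fit(z)
    # Loadings = eigenvectors scaled by sqrt(eigenvalues); these play the role of
    # the correlation coefficients shown on the component loading plots.
    loadings = pca.components_.T * np.sqrt(pca.explained_variance_)
    print("variance explained by PC-1/PC-2:", pca.explained_variance_ratio_)
    return pd.DataFrame(loadings, index=marker_df.columns, columns=["PC1", "PC2"])

# Sampling-adequacy and sphericity checks (factor_analyzer package, if installed):
# from factor_analyzer.factor_analyzer import calculate_kmo, calculate_bartlett_sphericity
# kmo_per_variable, kmo_total = calculate_kmo(marker_df)
# chi_square, p_value = calculate_bartlett_sphericity(marker_df)
```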
In order to interpret multiple discriminant functions arising from analyses with more than two groups and more than one variable, we first tested the different functions for statistical significance, and only considered significant functions for further examination. Next, we looked at the standardized b coefficients for each variable for each significant function. The larger the standardized b coefficient, the larger is the respective variable's unique contribution to the discrimination specified by the respective discriminant function. The classification matrix was used to determine how well the current classification functions predict group membership as defined by survival. The classification matrix shows the number of cases that were correctly classified (on the diagonal of the matrix) and those that were misclassified. To perform the discriminant analysis each treatment arm was sub-divided into three categories: survivors over 60 months, and the remaining patients split at the median survival for each treatment arm. Results Paired week-0 and week-4 blood samples were available for 22 of 24 TCV-treated patients and 17 of 18 DCV-treated patients [9]. The two missing TCV-treated patients did not have a week-4 sample because of rapidly progressing metastatic disease and both died within 2 months of enrollment. The missing DCV-treated patient was alive at 5 years but had rescinded permission to study his blood samples. Principal component analysis at baseline The PCA of cytokines and immunoglobulins (Ig) at baseline are shown in Fig. 1. Before vaccine treatment was initiated, the distribution of PCA values for TCV-treated and DCV-treated patients were similar. Deviation from normal values for all 39 patients defined two distinctive groups based on the loading plot. The correlative group of cytokines on component 1 (PC1) accounted for 64.4% of variance and a correlative group of immunoglobulins (PC2) accounted for 7.5% of variance (Additional file 1). As shown in Table 1, the majority of the cytokines with correlations greater than 0.5 are those classically associated with Th1, and Th2 immune responses. This collection of cytokines positively correlated with B2M, PD1 and TGFβ1 suggesting coexistence of pre-existing tumor-associated inflammation and immunosuppression. Figure 2 shows the PCA of cytokine changes after 3 weekly vaccine injections. The change from baseline for each cytokine was calculated by subtracting the baseline value from the week-4 value and dividing the difference by the smaller of the baseline or the week-4 value to avoid negative numbers when week-4 levels were lower than baseline values. Using this method, a positive or negative correlation vector was calculated for each variable. Cytokine changes after 3 TCV injections In TCV-treated patients, PCA showed the contribution of multiple factors with the combination of PC1 and PC2 responsible for only 54% of variance (Fig. 2a, Additional file 2). There was a positive correlation on PC1 for most of the cytokines suggesting the evolution of tumor-associated inflammation, and on PC2 for tumor markers (B2M, FAS receptor) along with Th2-associated cytokines (Fig. 2a, Table 2). Changes in IL12 correlated with both Th1-and Th2-cytokines suggesting an adaptive response mediated by antigen presenting cells (APC) that had been suppressed by TGFβ1. Thus, after three injections, the cytokine changes in the TCV group were most consistent with a Th2 adaptive response and an innate response. 
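The change-from-baseline metric described above is simple enough to state in code; the following is a minimal sketch in which the function name and example values are ours, not the authors'.

```python
# Change metric described in the text: subtract the week-0 value from the week-4
# value and divide by the smaller of the two, so a rise and a fall of the same
# proportion have the same magnitude.  Assumes concentrations are positive.
def relative_change(week0_pg_ml: float, week4_pg_ml: float) -> float:
    return (week4_pg_ml - week0_pg_ml) / min(week0_pg_ml, week4_pg_ml)

# Hypothetical example: 100 -> 250 pg/mL gives +1.5; 250 -> 100 pg/mL gives -1.5.
assert relative_change(100, 250) == 1.5
assert relative_change(250, 100) == -1.5
```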
PCA limited to factors with component values above 0.5 (Table 3) revealed a significant association of the variables (KMO = 0.701, BTS < 0.001; Additional file 3). The first two components contributed equally (40.1% and 39.5% of the variance, respectively; Additional file 4), thus organizing the variables into two groups. One group contains variables associated with IL6, IL15, and IFNγ, but not associated with the APC-associated cytokine IL12p70, while the other group contains markers associated with APCs. This suggests an adaptive Th2 response mediated through immunoglobulins, and possibly a suppressed Th17/Th1 response based on the association with TGFβ1. Survival correlated negatively with the suppressed Th1/innate response, but slightly positively with the increased immunoglobulins, suggesting dominance of a Th2 response in TCV-treated patients. TCV-treated patients exhibited changes in Th2 cytokines, suggesting that antigens on the injected irradiated tumor cells primarily elicited a Th2 response mediated by endogenous APCs. Cytokine changes after 3 DCV injections In DCV-treated patients there was a correlation between Th1/Th17 cytokines on PC1 and immunoglobulins on PC2 (Fig. 2b), with PC1 and PC2 accounting for 39.1% and 12.6% of variance respectively (Additional file 5). The first two components were used for further analysis and component plotting (Table 4). Inflammatory cytokines correlated strongly with an APC-driven effect (IL12p70) through Th1 (IFNγ, TNFα) combined with a Th17 (IL17) response, as well as through Th2 driven by IL12p70 in association with IL5 (hypersensitivity) and immunoglobulins on component 2. Survival correlated with the Th1/Th17 factors on PC1. On both PC1 and PC2, survival correlated positively with IgG1 and IgG3 but negatively with IgM, IgG2 and IgG4. The inverse correlation of IgG4/IgG2 with IgG1/IgG3 is consistent with immunoglobulin class-switching, suggesting that antigen-specific immunoglobulin responses may have contributed to survival. It is also noteworthy that B2M and FAS receptor are negatively correlated with the immune response and survival. Collectively, the data suggest an adaptive response in the DCV-treated group that included immunoglobulin class-switching and a Th17/Th1 response. The cytotoxic Th17 response is not correlated with IL23 [68,69], a cytokine that is expected to drive naïve CD4+ cells towards Th17 lineages associated with immunosuppression. Instead, IL17 is strongly correlated with TNFα and IFNγ, consistent with conversion of an existing cognate population of Th17 cells from a tolerizing to a cytotoxicity-facilitating phenotype [34,36,37]. Closer examination of factors positively correlated on PC1 confirms a significant association of this group (KMO = 0.718 and BTS < 0.001; Additional file 6). The analysis of the reduced set of factors identifies two components that account for 49.0% and 33.3% of the variance. Within component 1, IL17 has the highest coefficient, associated with a Th1 response driven by APC (IL12p70) (Table 5). The second component contains the innate response associated with Th2 (IL4, TGFβ1) and other pleiotropic factors (IL7, IL8) (Table 5). Both components were associated with IFNγ and TNFα. Thus, after three DCV injections, the analysis suggests a multifaceted response driven by the Th1/Th17 cytotoxic (Th1-like) pathway and a Th2 immunoglobulin response. Discriminant analysis by treatment arm and survival Each treatment arm was sub-divided into three categories based on overall survival (Table 6). 
DA was then applied to cytokine data for each patient in each treatment arm, with results as displayed in Fig. 3. Regardless of treatment, DA correctly classified patients into their appropriate survival subgroup (Additional file 7). Discriminant analysis of TCV-treated patients DA based on changes of inflammatory markers among TCV-treated patients yielded a discriminant function for survival (p = 0.015, Additional file 8) that accounted for 100% of variance, with 90.2% of variance in the first discriminant function (Additional file 9). DA also accurately classified the three survivor subgroups (Fig. 3a). However, none of the variables reached statistical significance (Additional file 10). Stepwise DA failed to identify any significant variables that qualified for further analysis based on F function entry criteria (p < 0.05). Thus, DA of TCV patients was unable to identify a cytokine pattern to explain the survival distribution. Discriminant analysis of DCV-treated patients DA based on changes of inflammatory markers among DCV-treated patients yielded a discriminant function for survival (p = 0.020, Additional file 11) that accounted for 100% of variance, with 92.9% of variance in the first discriminant function (Additional file 12). DA accurately classified the three survivor subgroups (Fig. 3b). Stepwise DA identified IL17 and TGFβ1 as the most important variables that discriminated among survival groups. These two variables identified two significant discriminant functions (p = 0.003 and p = 0.023, Additional file 13) that explained 100% of variance, with 73.1% in the first function (Additional file 14). The discriminant function of TGFβ1 and IL17 correctly classified 70.6% of the DCV-treated patients (Fig. 3c). All coefficients were less than 0.05 for each variable and each survival group (Table 7). Thus, persistence of TGFβ1 and a decrease in IL17 was associated with relatively short survival. Persistence or increase of IL17 and an increase of TGFβ1 (consistent with an ongoing response that could be either antitumor or tolerizing depending on the TGFβ1 dominance) was associated with intermediate survival. A decrease in TGFβ1 combined with an increase of IL17 was associated with long-term survival. Immunoglobulins (Th2 pathway) PCA suggested there was an existing immunoglobulin response at baseline, but also a new immunoglobulin response in each treatment arm after three injections; therefore, this pathway was examined relative to survival. Baseline immunoglobulin values other than IgM were moderately elevated compared to healthy controls. We assumed that the vaccines were inducing a new response, so we analyzed changes in IgM that might be expected after exposure to new antigens. The correlation between baseline IgM levels and survival is shown for TCV-treated patients in Fig. 4a, and for DCV-treated patients in Fig. 4b. The average IgM of healthy controls was 120.2 mg/dL, which is in the 40-230 mg/dL standard range of normal IgM for ages 45 years and older [70]. The changes in mean and median Ig levels after three injections of either vaccine are also shown in Table 8. The slightly increased baseline IgM in TCV-treated patients correlated with longer survival (Fig. 4a). The baseline IgM level was normal in DCV-treated patients and not correlated with survival (Fig. 4b). Linear regression analysis was confirmed by Cox's proportional hazards model for survival using baseline IgM as a cofactor in TCV-treated patients (p = 0.014, Additional file 14). 
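As an illustration of the discriminant step (the study itself used SPSS), a comparable classification of each arm's three survival categories could be sketched as below; the DataFrame layout, the column names for the IL17 and TGFβ1 changes, and the resubstitution scoring are our assumptions rather than the published procedure.

```python
# Illustrative linear discriminant analysis: classify patients of one treatment
# arm into three survival categories from changes in selected markers.
import pandas as pd
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def classify_survival(change_df: pd.DataFrame, survival_group: pd.Series,
                      markers=("IL17", "TGFb1")) -> float:
    """Fit a two-function LDA on the selected marker changes and return the
    fraction of patients correctly re-classified into their survival group."""
    X = change_df[list(markers)].values
    y = survival_group.values          # e.g. "<median", ">=median", "60+ months"
    lda = LinearDiscriminantAnalysis(n_components=2).fit(X, y)
    print("discriminant scalings:\n", lda.scalings_)
    return lda.score(X, y)
```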
The DCV regression graph shows two distinct distributions on either side of 36 months (Fig. 4b). This suggests that long-term survivors may have different mechanisms of response; therefore, we analyzed patients who survived less than or more than 60 months separately in both treatment arms (Table 9). TCV-treated patients who survived more than 60 months had elevated baseline IgM levels (p = 0.013, ANOVA) that were higher than for TCV-treated patients who survived less than 60 months (p = 0.033 independent sample t-tests, 0.034 Mann-Whitney), higher than for DCV-treated patients who survived less than 60 months (p = 0.008 independent sample t-tests, 0. Mann-Whitney), but not higher than in DCV-treated patients who survived 60+ months (p = 0.183 independent sample t-tests, p = 0.175 Mann-Whitney). Perhaps patients who survived 60+ months had an existing antitumor Th2 response that contributed to better survival even in the absence of vaccine treatment. After three injections, only DCV-treated patients who survived less than 60 months had an increase in IgM (p = 0.045 paired t-test, p = 0.06 Wilcoxon signed rank), but this did not correlate with survival (Additional file 15). T-helper cytotoxic pathways (Th1, Th17 pathways) The Th17 phenotype is induced by a combination of IL6, IL23 and TGFβ1 [31][32][33], and functionally by the amplitude of TGFβ1 [71]. Th17 cells have a dual-state plasticity; a high TGFβ1 induces regulatory or immunetolerizing effects while low TGFβ1 allows a Th1-like cytotoxic response that includes IFNγ and TNFα [34,36,37]. A strong IL12/IL23 response leading to an IL4-mediated response suggests a new T-helper (CD4) response with a new Th17 component [42]. The lack of association with IL23 suggests a helper response that does not require Th17 [42]. Positive correlation with IFNγ suggests a Th1 response; positive correlation with IL10 suggests a Th2 response; and positive correlation with TGFβ1 suggests a regulatory T lymphocyte (Treg) response. In the tumor microenvironment Th17 cells are likely antigen-cognate and can change from a suppressive to cytotoxic state in response to local factors [36,37]. The sudden increase of IL17 in association with TNFα suggests a pro-inflammatory switch of Th17 cells to a cytotoxic helper function rather than suppression. An IL23 increase could reflect lineage stimulation from naïve CD4+ cells that can be directed to both cytotoxic and regulatory pathways, depending on the local environment. Downstream effectors of Th17 can result in cytotoxic lymphocytes (Th1-like pathway) [36,37], a Th2 pathway through IgM induction [72], or suppressive (as in a Treg pathway) [32]. Because Th17 is a versatile cell type, estimations of Th17 function should include additional factors such as IFNγ, TNF, TGFβ1, and Th17-associated cytokines (IL17, IL21, IL22, IL27, IL31, IL33). All patients were included in linear regression analysis between cytokines and survival, but long-term follow up ended at 60 months; therefore, patients surviving longer were excluded from linear regression analysis of the cytokines selected as impacting survival by PCA and DA. Cytokines identified by DA accurately classified patients into survival groups. The associations between survival, IL17, IL12p70, IFNγ, and TNFα are shown in Fig. 5. The change in IL17 correlated with survival for DCV-treated patients who survived less than 60 months (Fig. 5a). Increases in IL17 were correlated with increased IL12p70, a cytokine produced by DCs (Fig. 5b). IL17 also correlated with IFNγ (Fig. 
5c) and TNFα (Fig. 5d), both of which are components of Th1 responses. This suggests that the increased IL17 is associated with a Th1-like response that was triggered by antigen-loaded DC that secreted IL12. To explore the source of IL17, we investigated its association with IL23, another cytokine that can generate IL17-secreting Th17 lymphocytes [73,74]. IL17 did not correlate with IL23 (p = 0.248, ANOVA); so the source of IL17 was not a new population of Th17 cells, but possibly resulted from conversion of an existing antigen-cognate Th17 population from a tolerizing state to a cytotoxicity-inducing state [34,36,37]. In the TCV arm there were no post-treatment changes in cytokines that correlated with survival except for IL17, which correlated negatively (Fig. 6a). In TCV-treated patients IL17 positively correlated with TGFβ1 (Fig. 6b) and IFNγ (p = 0.038), with cytokines associated with innate immune responses: IL15 (p = 0.003) and IL8 (p = 0.002), and with Th2-associated cytokines: IL13 (p = 0.038), IL10 (p = 0.024), IL7 (p = 0.038), IL6 (p = 0.01), IL5 (p = 0.07) and IL2 (p = 0.018). The cytokine changes were not consistent with a cytotoxic Th1 response, but rather with conversion of an existing antigen-cognate Th17 population from a tolerizing state to a cytotoxicity-inducing state, as would be expected after inoculation with an antigen that is recognized and presented by endogenous APCs in vivo. The association with TGFβ1 suggests that during antigen processing and presentation in vivo, TGFβ1 affected the immune response by promoting Th2 subpopulations and perhaps immunosuppressive Tregs. In DCV-treated patients DCs were antigen-loaded ex vivo in the absence of suppressive cytokines (such as TGFβ1). After subcutaneous injection, it appears that the large dose of activated DCs produced sufficient amounts of IL12 to trigger a Th1 response. The first response likely came from cross-presentation of antigens directly to cytotoxic T lymphocytes (CTL) and/or from conversion of inherently plastic antigen-cognate Th17 cells into pro-cytotoxic states. It appears this triggering signal was only provided by ex vivo loaded DC, leading to increases in IL17, IFNγ, and TNFα, and was not observed in TCV-treated patients. After this first wave of Th17 cytotoxic conversion, a second wave of helper cells likely developed based on the positive association between IL12p70 and IL4 in both TCV-treated (ANOVA, p = 0.001) and DCV-treated patients (ANOVA, p = 0.005). This is consistent with de-novo antigen induction of a Th2 response, but in DCV-treated patients this was accompanied by increases in IFNγ (p = 0.001) and TNFα (p < 0.0001), which were not seen following TCV. The association of cytotoxic cytokines in DCV-treated patients suggests new helper cells sustained a persistent Th1 response, while in TCV-treated patients only a Th2 response persisted. Furthermore, in the DCV group there was a new Th2 response evidenced by PCA and DA of the immunoglobulins that showed a correlative increase and a class switching of IgG1 and IgG3 as opposed to IgG2, IgG4, and IgM. This was only observed in the DCV group, suggesting that while both treatment arms respond with a Th2 mechanism, the ex vivo-generated DCs may present antigens at a better signal-to-noise ratio and thereby enhance or induce de-novo immunoglobulin responses. IL12 is a cytokine produced by APCs when they are activated by exposure to antigens [40,44]. 
In terms of antigen processing, the differences between the treatment arms is the source of the APCs: endogenous in situ APC in the case of TCV, and ex vivo antigen loaded DC in the case of DCV. In DCV the antigens are processed by DCs derived ex vivo from peripheral blood monocyte [9]. DCs mature and migrate to lymph nodes after exposure to antigen, where they contact effector T-cells, B-cells and natural killer cells (NKs) [40,68]. Efficient presentation of the antigen is regulated by the interaction of the MHCs with T-cell receptors (TCR), and by regulatory and costimulatory connectors as well as by cytokine and chemokine signals [69,75]. In addition to T-cell interaction, the antigen processing and presentation is modified by the state of the APCs. First there is a difference in the maturation stage of the APCs. Studies have shown that DC maturation is accompanied by a marked reorganization of endocytic compartments [76][77][78], and a concomitant inhibition of antigen uptake [79][80][81][82][83]. Antigen uptake and processing is limited to immature DCs, which contribute to the functional distinction between immature and mature DCs [82]. Endogenous APCs represent a minor population of cells at the site of TCV injections. In contrast the ex vivo loaded DCV injection contains a massive dose of 1 to 30 million immature DCs that originated from peripheral monocytes, that have not been exposed to additional maturation factors such as lipopolysaccharides. The phenotype and functionality of the ex vivo derived DCs is expected to be substantially different from the in situ APCs. As previously described, the immature DCs generated in vitro are more efficient for cross-presentation initially [83]. In addition to the massive number of simultaneously activated APCs (DCs) in DCV, the maturation differences could explain the initial Th1 cytotoxic response, cytotoxic conversion of Th17, followed by the Th2 response which could be mediated by endogenous APCs responding to apoptotic antigenloaded DC, as opposed to the predominate Th2 response associated with the smaller number of more mature and less numerous APCs induced by TCV. DCs are at the nexus of innate and adaptive immunity and have evolved to orchestrate a multi-pronged immune response [84]. Regardless of antigen source, DC are able to present antigen by both MHC I and MHC II pathways to induce both Th1 and Th2 immune responses [83][84][85][86][87][88][89], both of which are necessary for an optimal immune response to tumor antigen [90]. DC also can induce Th17 responses [91]. The role of Th17 cells in cancer is of increasing interest, especially because Th17 cells exhibit a plasticity that can result in their differentiation into Treg or Th17/Th1 cells [37,92,93]. The latter increasingly lose the ability to secrete IL17, but are able to secrete larger quantities of TNF, IL2, GM-CSF, and IFNγ than classical Th1 cells. It appears that TGFβ drives Th17 cells to become Tregs, and the absence of TGFβ and the presence of cytokines such as IL12 and IL23 is required for conversion to the Th17/Th1 phenotype. Although Th17 cells and Th17/Th1 cells are not believed to be cytotoxic, this Th1 helper phenotype is associated with increased cytotoxic T lymphocytes (CTL) in the tumor microenvironment. The ability of antigen-loaded DC to induce differentiation of Th17 cells into Th1 helper cells associated with anti-tumor effects was demonstrated in animal models [94]. Our analysis suggests that the patient-specific DCVs induced similar immune responses. 
Innate pathways Pattern-recognition receptors (PRR) are proteins expressed by immune cells that recognize pathogen-associated molecular patterns as danger signals [95,96]. IL6, IL8, and TNFα are produced in response to PRR signaling, which triggers a response from NK cells that is associated with production of IL15, IL18, and IFNγ [14][15][16][17]. Cytokines that regulate innate immunity are produced primarily by mononuclear phagocytes such as macrophages and DCs, although they can also be produced by T-lymphocytes, natural killer (NK) cells, endothelial cells, and mucosal epithelial cells. In injured tissue the innate immune system down-regulates effector mechanisms and restores homoeostasis via cytokines such as IL10 and TGFβ1 that are released by macrophages, preferentially the M2 subset, which can induce Tregs, inhibit pro-inflammatory cytokine production, and induce tissue healing by regulating extracellular matrix protein deposition and angiogenesis. Discussion This study provides additional insight into differences in immune responses elicited by DCV and TCV. Both vaccines presented autologous tumor antigens but were associated with different immune responses and different survival benefit. While the results in the study are purely correlative, they are suggestive of underlying immunologic mechanisms of action. The major finding of this analysis is that DCV was associated with a multipronged immune response that included innate, Th2, and Th1/Th17 responses, while the TCV immune responses were limited to innate and Th2. A direct correlation between Th1/Th17 changes and survival was also demonstrated. The results provide additional evidence that for therapeutic cancer vaccines it may be advantageous to present antigens via DCs that were loaded with antigen ex vivo. The analysis presented herein is complex and involved correlations of multiple variables derived from relatively small sample sizes. In addition to the analysis described in the manuscript, IBM SPSS (build 1.0.0.1298) was used to create a standard model using the Automatic Linear Modeling function, targeting survival as the dependent variable. A forward stepwise model was selected, and AICC criteria or F statistics (include effects with p < 0.05, remove effects at p > 0.1) were used for entry/removal of variables. Although the results were significant for many predictors, we felt that the reliability of the test might be criticized because of the small sample and large variability. Therefore, we performed a variable reduction with PCA and DA (variable normality verified) based on Bayesian estimation. Later, the synthetic variables obtained by PCA reduction of various groups of variables were examined in a regression model, but that approach did not provide meaningful results. The PCA graphics of the main components were presented herein without a statistical conclusion because representation of the differences between groups was provided with more confidence in the DA results. Nonetheless, these observations are hypothesis-generating, or suggestive; clearly more cases would be needed for a robust statistical analysis. With regard to immunoglobulins, there was evidence of a Th2 response that included increases in IgM in both treatment arms, especially in the DCV arm. The Th2 response was presumably mediated by endogenous, in situ DCs in the TCV arm, while the ex vivo antigen-loaded DCV likely caused an early cytokine-mediated or cross-presentation response, followed by a new presentation of antigens through a typical helper pathway. 
In the TCV arm the changes are presumably in response to the additional antigenic stimulation provided by the irradiated tumor cells, while in the DCV arm we believe that the initial cross-presentation by DCV induced a new Th2 response as evidenced by immunoglobulin class switching. An association with a Th1 T-cell response is evidenced by the increase in IFNγ and TNFα with IL-12 only in the DCV arm, but this was not evident in the TCV arm. Just prior to each subcutaneous injection, GM-CSF was admixed with tumor cells for TCV and dendritic cells for DCV for its adjuvant effects [97,98], and specific effects on dendritic cells [99]. GM-CSF was given in the same dose and schedule in both arms; therefore, in the absence of a GM-CSF-alone control arm, we cannot identify the specific effects that GM-CSF induced in each arm. GM-CSF was not one of the cytokines measured in the analysis, so we have no data regarding its association with other cytokines in the principal component analyses. We know that levels of granulocyte colony stimulating factor (G-CSF) and macrophage colony stimulating factor (M-CSF) did not change after three injections of either TCV or DCV. Thymus and activation regulated chemokine CCL17 (TARC), which is induced by GM-CSF [100], was elevated significantly and similarly in both arms, and was therefore excluded from the analysis. The clinical trial from which this data was derived was the first randomized study testing therapeutic cancer vaccines in which there was a difference in survival in the treatment arms and for which associated proteomic data has been analyzed extensively for changes in, and correlations with, circulating markers [9]. Trying to decipher immune responses and their relation to clinical outcome in vaccine clinical trials is challenging. In one study in which colorectal cancer patients were treated with autologous dendritic cells loaded with allogeneic tumor cell lysate, plasma and serum samples were collected prior to vaccination and continuously during treatment [101]. Patients classified as having stable disease had increasing levels of IL2, IL5, TNFα, IFNγ, and GM-CSF, while increases in carcinoembryonic antigen (CEA) and tissue inhibitor of metalloproteinases-1 (TIMP-1) levels were associated with progressive disease. No correlative changes were noted for IL1b, IL4, IL6, IL8, IL10, IL12, macrophage inflammatory protein 1beta (MIP-1β), interferon-inducible protein 10 (IP-10), or Eotaxin. That study was limited by the lack of a control arm and the limited number of cytokines examined. In another study, immune monitoring was conducted in association with an 815-patient six-arm trial that randomized patients with surgically resected stage 3 and 4 melanoma to peptide vaccines or placebo with GM-CSF or placebo in patients of appropriate HLA-type, and GM-CSF or placebo in patients who were HLA-A2 negative [102,103]. One challenge for correlative analyses was that none of the treatment variables impacted overall survival compared to placebo [36]. The focus of the immune analysis was primarily on changes in immune cell phenotypes and their recognition of injected antigens rather than on changes in cytokines. There were no vaccine-specific correlations identified, and the cellular and humoral responses did not correlate with survival in the manner predicted [103]. 
The major strengths of our study include: (1) the use of data and samples from a randomized clinical trial that tested ATA presentation by two different cell sources (dendritic cells and cancer cells), (2) the availability of paired blood samples obtained at baseline and after 3 weekly injections, (3) the treatments tested were associated with different survivals, enabling a direct correlation between treatment, immune response, and survival, (4) the large number of immune markers tested, (5) the longterm follow-up for correlations with survival, and (6) the power of the statistical tools used to group positively and negatively correlated variables. The limitations of the study include: (1) the relatively small sample size, (2) paired samples were not available for 7.0% of the patients, (3) samples were not available for testing at earlier and later time points other than week-0 and week-4. Conclusions DCV induced a more effective immune response than that induced by TCV, and these immune responses were associated with improved survival. DCV was associated with innate, Th1/Th17, and Th2 responses while TCV was only associated with innate and Th2 responses. Figure 7 is an infographic summary of these changes.
2020-04-21T14:33:09.540Z
2020-04-21T00:00:00.000
{ "year": 2020, "sha1": "44f5885dd2330b601ed7039c94eeadd3b0271168", "oa_license": "CCBY", "oa_url": "https://translational-medicine.biomedcentral.com/track/pdf/10.1186/s12967-020-02328-6.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3d9d27ae8491edfb2351e157ea7f647d539bc60c", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
812729
pes2o/s2orc
v3-fos-license
Stem loop-mediated isothermal amplification test: comparative analysis with classical LAMP and PCR in detection of Entamoeba histolytica in Kenya Background Entamoeba histolytica, the causative agent of amoebiasis, is a considerable burden to populations in developing countries, where it accounts for over 50 million infections. The tools for detection of amoebiasis are inadequate and diagnosis relies on microscopy, which means a significant percentage of cases remain undiagnosed. Moreover, test formats that can be rapidly applied in rural endemic areas are not available. Methods In this study, a loop-mediated isothermal amplification (LAMP) test based on the 18S small subunit ribosomal RNA gene was designed with extra reaction accelerating primers (stem primers) and compared with the published LAMP and PCR tests in detection of E. histolytica DNA in clinical samples. Results The stem LAMP test indicated a shorter time to results by an average of 11 min and an analytical sensitivity of 10−7 (~30 pg/ml), compared to the standard LAMP and PCR, which showed sensitivity levels of 10−5 (~3 ng/ml) and 10−4 (~30 ng/ml) respectively, using tenfold serial dilution of DNA. In the analysis of clinical specimens positive for Entamoeba spp. trophozoites and cysts using microscopy, the stem LAMP test detected E. histolytica DNA in 36/126, the standard LAMP test in 20/126 and PCR in 17/126 cases, respectively. There was 100% agreement in detection of the stem LAMP test product using fluorescence of SYTO-9 dye in a real-time machine, through addition of a 1/10 dilution of SYBR® Green I, and by electrophoresis in 2% agarose gel stained with ethidium bromide. Conclusion The stem LAMP test developed in this study shows potential for the detection of E. histolytica. Background Amoebiasis, caused by the protozoan Entamoeba histolytica, is an important human gastrointestinal infection responsible for over 50 million amoebic infection cases with over 100,000 deaths annually [1]. It is a leading cause of death, surpassed only by malaria and schistosomiasis [2], with most of these cases being reported in developing countries [3][4][5]. In Africa, the burden of amoebiasis is high, with an estimated E. histolytica infection median rate of 796 per 100,000 people [6]. Studies conducted in Kenya indicated a prevalence of 6-11% of E. histolytica/Entamoeba dispar in children at selected hospitals [4,5,7] and 11-32% among adults [8]. Moreover, in more recent studies, the prevalence of E. histolytica by qPCR was recorded at 15% in Bungoma County, Western Kenya [9], while a much lower prevalence of 0.4% was reported among children with vertically transmitted HIV infection [10]. These data suggest that amoebiasis is a heavy burden on the Kenyan population, which is also plagued by other diseases such as malaria, HIV-AIDS, tuberculosis and non-communicable diseases. The E. histolytica infection causes several intestinal and extra-intestinal conditions, with dysentery and liver abscess being the most common [11]. The algorithm to diagnose amoebiasis is often complex due to the unsatisfactory sensitivity and specificity of available tests. Microscopy is widely used but has low sensitivity and cannot differentiate E. histolytica from the morphologically similar non-pathogenic species E. dispar and the amphizoic E. moshkovskii [12]. Stool culture followed by isoenzyme analysis has been used to differentiate species, but the methods are time consuming and hence impractical for use in routine diagnosis. 
Antibody detection tests have been developed and used widely but their downturn is low sensitivity in early disease and inability to distinguish active infection from previous exposure [13]. The E. histolytica antigen detection in the stool using ELISA tests [14][15][16] has proved more sensitive than microscopy, however cross-reactivity with E. dispar limit their application [17,18]. This far the PCR method has been the most sensitive method for discriminating between E. histolytica and E. dispar. Indeed, several PCR tests have been developed [19][20][21] but despite the reported advantage of PCR tests in diagnosis of E. histolytica, the method has limited use in routine diagnosis of amoebiasis in Kenya due to associated cost. In the last decade, a rapid DNA amplification test called loop-mediated isothermal amplification (LAMP) of DNA was developed [22]. The technique is a novel strategy for gene amplification which relies on DNA polymerase with strand displacement activities. The LAMP technique has recently been applied in detection of human diseases such as malaria [23,24], human toxoplasmosis [25] and meningitis [26] and has been hypothesized to revolutionize field based molecular test [27,28]. The LAMP is well suited for E. histolytica diagnosis in endemic areas because it does not require expensive equipment to achieve amplifications, sensitivity is equivalent to that of PCR and time to results is approximately 1 h. Moreover, the large amount of products formed offers the use of different visual detection formats that are applicable in rural endemic areas. Indeed, LAMP has recently been used successfully to detect other human stool pathogens such as Ascaris lumbricoides [29], Clostridium difficile [30] and hookworms [31]. Previously, LAMP tests for E. histolytica have been developed based on small subunit rRNA gene [32] and HLY6 gene [33]. In 18S rRNA gene, the nested PCR detected 0.1-1 parasite per reaction compared to LAMP test which detected 1 parasite [32] and 2 ng/µl compared to 15.8 ng/µl (~5 parasites per reaction) for LAMP test using DNA for HLY6 target respectively [33]. LAMP uses many reaction components which is a major cost in the developing countries [34]. However, several companies have come up with ready to use commercial isothermal master mixes slightly reducing the cost burden and need for protracted optimization procedures. These include Optigene, UK (http:// www.optigene.co.uk/products-reagents/) and EIKEN Chemical Co Ltd, Japan (http://www.eiken.co.jp/en/). The advantages presented by LAMP method as a potential point of use test calls for more attention in improving this platform for use in endemic countries. On this context, [35] reported improved amplification speed and sensitivity of C. difficile, Listeria monocytogenes and HIV LAMP tests through addition of a second reaction accelerating primers called stem primers (target the stem section of the LAMP amplicon). In addition, stem primers have been used recently to improve the sensitivity of Trypanosoma brucei gambiense LAMP test by ~100-fold compared to the standard LAMP test [36]. The advantage of stem primers is that they can be used in multiplex with loop primers [37] without affecting test reproducibility. In this work, we report an improved LAMP test for E. histolytica with inclusion of stem primers. Reference DNA The reference DNA sample of E. histolytica HM-1: IMSS was kindly provided by Dr. Graham Clark, Department of Pathogen Molecular Biology, London School of Hygiene and Tropical Medicine, UK. 
The DNA to check the test specificity was prepared from E. dispar and Giardia lamblia using commercial DNA extraction kit (Qiagen, Essex, UK). Clinical samples All samples were collected from children who presented to three participating outpatient clinics and those admitted to the paediatric ward of Mbagathi District hospital, Nairobi were examined for the presence of E. histolytica. In order to improve sensitivity of microscopy in detection of E. histolytica cysts, the technique of formal-ether concentration was applied [38]. DNA extraction The DNA was prepared from 126 samples scored as positive for Entamoeba (E. histolytica, E. dispar and E. moshkovskii complex) using microscopy. Genomic DNA was extracted using QiAmp ® DNA stool Mini kit (Qiagen, Crawley, West Sussex, United Kingdom) as per the manufacturer's instructions with slight modifications. Briefly, 200 μl of fecal suspension was washed five times with distilled water. To this suspension, 1.4 ml of ASL buffer was added and subjected to five times thawing (80 °C) and freezing (−80 °C) to rupture the rigid cysts. The genomic DNA was eluted in 50 μl of nuclease-free water and stored at −20 °C until use. PCR test The PCR test targeting the small-subunit rRNA gene was used [20] with some modifications. Briefly a 25 µl test was done and consisted of 1× PCR buffer, 1.5 mM MgCl 2 , 2 mM dNTPs, 0.5 U of Taq polymerase and 10 pmol of forward primer (EntaF) and reverse primer (EhR). These primers generate a 166-bp PCR product and are specific for E. histolytica. The reference DNA template was 2 µl of DNA and 3-4 μl for clinical samples. The amplifications were done in a PCR system 9700 thermal cycler (Applied Biosystems, UK) under the following cycling conditions: An initial denaturation step at 94 °C for 3 min, followed by 35 cycles each consisting denaturation at 94 °C for 1 min, annealing at 58 °C for 1 min and extension at 72 °C for 1 min. The final extension was at 72 °C for 7 min. Reactions were done in duplicates and the resulting amplification products were separated by electrophoresis in 2.0% agarose gel in 1 × Tris-borate-EDTA at 100 V for 45 min and visualized under UV light after staining with ethidium bromide. Design of LAMP primers Four sets of primers each recognizing ten distinct sections of E. histolytica 18S small subunit ribosomal RNA (18S rRNA gene) (Genbank accession number X64142) and hemolysin (HLY6) gene (GenBank accession number Z29969.1) were designed using Primer Explorer version 3 software (http://primerexplorer.jp/lamp3.0.0/index. html). The targets were chosen due to the reported specificity and high number of copies (~200 copies) for18S rRNA gene [39] and HLY6 (400 copies/cell) [33]. The software designed the following primers: forward and backward outer primers (F3 and B3) and forward and backward inner primers (FIP and BIP). The loop forward and backward primers (LF and LB) and stem forward and backwards primers (SF and SB) were manually designed following the respective published primer characteristics [22,35]. The primers were blasted for target specificity using the basic local alignment search tool (http://www. ncbi.nlm.nih.gov/BLAST). The designed tests consisted of F3/B3, FIP/BIP, LF/LB and SF/SB primer combination. LAMP reactions The 18S and HLY6 LAMP primers were first analyzed for detection of the reference E. histolytica HM-1: IMSS using standard LAMP conditions. The tests specificity was checked with closely associated pathogen DNA extracted from morphologically similar but non-pathogenic E. 
dispar and G. lamblia. The primer set(s) that passed these criteria were then analyzed using a tenfold serial dilution of control DNA and the standard LAMP test conditions [22]. The most sensitive primer set for each target was selected for further analysis. The new tests were labeled stem 18S and stem HLY6 LAMP tests, respectively (Table 1), and the selected primer sets were used to optimize the respective LAMP tests using the Taguchi method [40]. Briefly, four reaction components determined to have the greatest effect on the LAMP reaction, namely inner primers, loop primers, stem primers and dNTPs, had their concentrations varied at three levels. The inner primer concentration was varied from 30 to 60 pmol, loop primers from 10 to 30 pmol, stem primers from 10 to 40 pmol and dNTPs from 1 to 3 mM, respectively. The concentrations of each reaction component were arranged in an orthogonal array [40] and used to determine the amount of amplification product formed [40]. This was followed by regression analysis to determine the concentration optima for each selected reaction component [40]. Other reaction components included 1× ThermoPol reaction buffer containing 20 mM Tris-HCl (pH 8.8), 10 mM KCl, 10 mM (NH4)2SO4, 2 mM MgSO4 and 0.1% Triton X-100. The Bst 3.0 DNA polymerase (New England Biolabs, MA, USA) was used at 0.5 µl, betaine at 0.8 M and SYTO-9 fluorescence dye at 2.0 µM (Molecular Probes, Oregon, USA). The template was 2 µl of DNA. The LAMP reactions were performed for 60 min at 62 °C using the real-time PCR machine, with data acquired on the FAM channel, followed by reaction inactivation at 80 °C for 5 min. Once the optimized reaction conditions were determined, the reactions were duplicated using a thermocycler and a water bath that maintained the temperature at ~61-63 °C. The template for clinical samples was varied from 2 to 4 µl. For comparative purposes, the published LAMP test based on the small subunit rRNA gene [32] and the HLY6 LAMP test [33] were included. Detection and confirmation of LAMP product The LAMP product was detected through fluorescence of SYTO-9 dye in a real-time PCR machine, through electrophoresis in 2% agarose gel stained with ethidium bromide, and after addition of 1 µl of a 1/10 dilution of 10,000× SYBR® Green I [20]. E. histolytica LAMP optimum reaction conditions The Taguchi method determined the optimal concentrations for the four reaction components in the stem 18S LAMP test as 35 pmol for FIP/BIP, 18 pmol for loop primers, 23 pmol for stem primers and 2 mM dNTPs. The stem HLY6 LAMP test showed the most efficient reaction at 40 pmol for FIP/BIP, loop primers at 20 pmol, stem primers at 15 pmol and 1.5 mM dNTPs. Concentrations for other reagents were as reported previously [22]. The optimum temperature for the stem LAMP test was determined at 62 °C, with 50 min as the reaction cut-off point. The stem 18S LAMP test indicated superior sensitivity to the stem HLY6 LAMP test, hence the latter was not progressed to the analysis of clinical samples. E. histolytica LAMP product The optimized E. histolytica stem 18S LAMP tests with and without outer primers indicated similar exponential real-time amplification curves (Fig. 1a) with a post-amplification melting temperature (Tm) of ~86 °C (Fig. 1b). The LAMP products showed the ladder-like pattern on the agarose gel, indicating the formation of stem-loop structures with inverted repeats (Fig. 2b). On addition of SYBR® Green I, the positive product turned green and the negative ones remained orange (Fig. 2c). 
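To make the Taguchi layout concrete, the sketch below enumerates a standard L9(3^4) orthogonal array for the four components named above. The low and high levels come from the ranges quoted in the text, but the middle levels are illustrative mid-points that we have assumed, and the regression on the measured amplification product is not shown.

```python
# Standard Taguchi L9 orthogonal array: nine runs covering four factors at three
# levels each.  Level values are illustrative (ranges from the text, mid-points assumed).
L9 = [(1, 1, 1, 1), (1, 2, 2, 2), (1, 3, 3, 3),
      (2, 1, 2, 3), (2, 2, 3, 1), (2, 3, 1, 2),
      (3, 1, 3, 2), (3, 2, 1, 3), (3, 3, 2, 1)]

LEVELS = {                                  # factor -> (level 1, level 2, level 3)
    "FIP/BIP (pmol)":      (30, 45, 60),
    "loop primers (pmol)": (10, 20, 30),
    "stem primers (pmol)": (10, 25, 40),
    "dNTPs (mM)":          (1.0, 2.0, 3.0),
}

for run, row in enumerate(L9, start=1):
    recipe = {name: values[idx - 1]
              for (name, values), idx in zip(LEVELS.items(), row)}
    print(f"run {run}: {recipe}")
```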
The DdeI restriction enzyme digestion of stem 18S LAMP test product indicated the predicted amplicons of 143 and 103 bp. Analytical sensitivity of LAMP and PCR tests The stem 18S LAMP tests (with and without outer primers) indicated identical detection limit of 10 −7 (30 pg/ ml) (Fig. 1a; Table 2) while the standard LAMP test (with loop primers) and published LAMP test (without loop primers) indicated detection levels ranging from 30 to 300 pg/ml ( Table 2). The standard LAMP test (without loop primers) showed low sensitivity and was not include in further analysis ( Table 2). The stem 18S LAMP test sensitivity was not altered when the stem primers were used either in their forward or reverse orientation and/or when the template was increased from 2 to 4 µl. The PCR test based on the same target showed detection limit of 10 −5 (3 ng/ml) (Fig. 1). The stem 18S LAMP test sensitivity was reproducible using thermocycler and water bath and no cross reactivity was recorded with non-target DNA. The optimized E. histolytica stem 18S LAMP test with and without outer primers F3 and B3 showed reduction in reaction time (cycle threshold = C T ) value of ~11 cycles ( Table 2) compared to the standard LAMP test targeting the same gene. Results for clinical samples The stem 18S LAMP tests with and without outer primers detected 36 (28.6%) while the standard and published LAMP tests detected 26 (20.6%) and 21 (16.7%) of E. histolytica DNA from samples scored as Entamoeba spp. using microscopy respectively (Table 3). We recorded intermittent non-specific products with some replicates for stem LAMP test with outer primers, in which case the replicates were repeated. The conventional PCR classified 18 (14.3%) as E. histolytica. Other LAMP tests formats were not used in sample analysis since they indicated inferior analytical sensitivity. Discussion In the present study we have designed a rapid and visual LAMP assay for detection of E. histolytica. The stem18S LAMP test is a modification of the standard LAMP test through inclusion of stem primers and indicate superior analytical sensitivity and shorter reaction time to results and translate to a higher detection of pathogen DNA in clinical samples compared to the standard LAMP format (Tables 2, 3). The recorded superior sensitivity can be attributed to the multiplexing of two reaction accelerating primers (loop and stem primers) in a single reaction as compared to the standard LAMP format with and/or without loop primers. The loop primers accelerate the reaction by priming the sequence loops between FIP/BIP primers [37] while the stem primers accelerate reaction by targeting the stem section of the sequence [35]. It is therefore the use of two reaction accelerating primers that exponentially increase the amount of LAMP product, hence reduction in reaction time and increase in sensitivity. Surprisingly the omission of outer primers did not affect the stem 18S LAMP test sensitivity, although the ladder like bands on agarose gel were less bright compared with the format with the outer primers. This may indicate formation of less product in the latter format but did not translate to less sensitivity in terms of pathogen DNA detection. Indeed, the products of the two LAMP formats were confirmed to be identical through acquisition of post amplification melt curves ( Fig. 1) and through digestion of the product with restriction enzyme. 
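The detection limits reported above can be restated as concentrations along the tenfold dilution series; the short sketch below does this arithmetic. The stock concentration is back-calculated from the reported end-points (a 10−7 dilution corresponding to ~30 pg/ml implies a stock of roughly 3 × 10^8 pg/ml) and should be read as our inference, not a value stated by the authors.

```python
# Concentrations along a tenfold serial dilution of the reference DNA.
STOCK_PG_PER_ML = 3e8   # inferred (~300 ug/ml), not reported directly

def dilution_series(steps: int = 8) -> None:
    for n in range(1, steps + 1):
        concentration = STOCK_PG_PER_ML * 10 ** -n
        print(f"10^-{n}: {concentration:,.1f} pg/ml")

# e.g. 10^-5 -> 3,000 pg/ml (3 ng/ml, the PCR limit in Results) and
#      10^-7 -> 30 pg/ml (the stem 18S LAMP limit).
dilution_series()
```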
The primary role of the outer primers is to displace the newly synthesized strands, making them available for extension by either inner primer [22]; the outer primers do not form part of the final LAMP product. It appears that the remaining primers may have some strand displacement activity, although not as efficient as the outer primers. The possibility of omitting the outer primers gives more flexibility for positioning of the remaining primers [35]. It is not clear why the LAMP test based on the HLY6 gene showed low sensitivity (10−2) and low detection of PCR-positive samples despite the reported higher number of copies (~400 copies) [33]. One possibility is that the reference DNA and the Kenyan samples may have mutations in the HLY6 gene or in the sequence section targeted by the published primers, hence poor priming. Sequencing of the HLY6 gene from Kenyan isolates may answer this question in the future. The lower sensitivity of the published LAMP format [32] compared to the stem LAMP format is attributable to the absence of loop primers. Indeed, our identical LAMP format based on the same gene showed similarly low detection levels to the published format (Table 2). On addition of loop primers, the analytical sensitivity of this LAMP format improved tenfold, translating to the detection of more positive clinical samples (Table 3). The use of loop primers to accelerate LAMP tests is recommended [37] and has been demonstrated to significantly improve LAMP test sensitivity and detection of pathogen DNA in clinical samples [42,43]. The sensitivity of the E. histolytica LAMP test is further improved in this study through multiplexing loop primers with stem primers. This sequential addition of primers, resulting in improvement of LAMP test sensitivity, is an unequivocal demonstration that the reaction accelerating primers are critical to any successful LAMP test. The resulting product was easily detected using SYBR® Green I dye, allowing visual inspection of results. SYBR® Green I is cheap, but the need to open the tube to add the dye risks contamination with amplicon. Further, the dye is non-specific and binds to any double-stranded DNA, including primer-dimers. To increase the confidence of using non-specific dyes, rigorous test optimization is necessary to reduce formation of spurious products. In addition, the use of more negative controls is recommended to increase the confidence limit. The stem LAMP test classified 36 (28.6%) of 126 DNA samples as E. histolytica. More encouraging was that all PCR-positive samples were also positive with the stem 18S LAMP test, indicating that both tests were detecting the same thing. In this study, the detection rate of E. histolytica was 14.4% using PCR, which is equivalent to the 13.3% reported earlier [20]. All LAMP formats showed a detection range of 15.9-28.6%, which indicates that the LAMP method is superior to classical PCR and is a good improvement towards the diagnosis of amoebiasis. Similar superiority of the stem LAMP format over PCR has been recorded in the diagnosis of sleeping sickness [36]. This is the first study in Kenya to report the detection of E. histolytica using the LAMP method. It is possible that the prevalence of E. histolytica is even higher, since a large portion of samples remained unidentified and/or the microscopically observed cysts belong to the morphologically similar but non-pathogenic E. dispar and E. moshkovskii. No tests were done to check for the presence of E. dispar and E. moshkovskii. 
The worldwide prevalence of E. dispar is reported to be nine times that of E. histolytica [2]. If the same ratio holds in Kenya, then a large portion of the remaining 90 (71.4%) DNA samples could be E. dispar. Methods that accurately differentiate Entamoeba spp. would help estimate their prevalence in Kenya and avoid unnecessary chemotherapy in patients carrying non-pathogenic species. It should be noted that in amoebiasis the decision to treat is based on demonstration of trophozoites and/or cysts in the stool; as such, the LAMP test may not be relied upon on its own to make a treatment decision. Since the LAMP test is faster to perform, the technique could form part of diagnostic algorithms for amoebiasis in which LAMP is used to select cases for further confirmation by PCR. Conclusions In this study: i. a new stem 18S LAMP test, a modification of the standard LAMP test through the inclusion of stem primers, was developed; ii. the stem 18S LAMP test recorded superior analytical sensitivity and a shorter reaction time to results; iii. the detection rate of E. histolytica using the new test was higher than the prevalence recorded earlier. It is therefore recommended that this new stem 18S LAMP test form part of diagnostic algorithms for amoebiasis.
2017-08-03T02:30:45.756Z
2017-03-31T00:00:00.000
{ "year": 2017, "sha1": "13c8aaaa02314e693106607159e6d0868cbd5f5b", "oa_license": "CCBY", "oa_url": "https://bmcresnotes.biomedcentral.com/track/pdf/10.1186/s13104-017-2466-3", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "924d4a04282258636e2a8bf4138db5b076e60daf", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
9116344
pes2o/s2orc
v3-fos-license
pPCV, a versatile vector for cloning PCR products The efficiency of PCR product cloning depends on the nature of the DNA polymerase employed because amplicons may have blunt-ends or 3′ adenosines overhangs. Therefore, for amplicon cloning, available commercial vectors are either blunt-ended or have a single 3′ overhanging thymidine. The aim of this work was to offer in a single vector the ability to clone both types of PCR products. For that purpose, a minimal polylinker was designed to include restriction sites for EcoRV and XcmI which enable direct cloning of amplicons bearing blunt-ends or A-overhangs, respectively, still offering blue/white selection. When tested, the resulting vector, pPCV, presented high efficiency cloning of both types of amplicons. Introduction The in vitro amplification of DNA fragments by polymerase chain reaction (PCR) is a routine technique in most molecular biology laboratories. Direct cloning of DNA fragments amplified by Taq DNA polymerase has frequently been found to be inefficient [Harrison et al. 1994] since this enzyme tends to add a non-templated nucleotide to the 3′ ends of the amplicon, mostly an adenosine residue, leaving a 3′overhang [Clark 1988]. To circumvent this limitation, some commercially available vectors were constructed in order to have a 3′-T overhang (T-vectors) for sticky-end cloning. Many strategies have been developed to add a 3′-T overhang. One approach involves tailing a blunt-ended vector using terminal transferase in the presence of dideoxythymidine triphosphate (ddTTP) [Holton & Graham 1991] but there is a high probability that some vector molecules will lack an overhang at one or both ends. These incomplete plasmids can circularize during ligation rendering ineffective for cloning [Jun et al. 2010]. Another approach is to digest a parental vector with a restriction enzyme that will generate single 3′-T overhangs. Restriction enzymes used for that purpose include BciVI, BfiI, HphI, MnlI, TaaI, XcmI and Eam1105I [Jun et al. 2010;Dimov 2012;Gu & Ye 2011;Borovkov & Rivkin 1997]. However, these vectors are not recommended for cloning amplicons produced by DNA polymerases which generate blunt-ended products. The aim of this work was to construct a vector based on pBlueScript® II KS with a modified polylinker which would allow direct cloning of PCR products bearing either blunt-ends or A-overhangs. Construction of T-vector The stuffer DNA used in this work was derived from a fragment of the S. cerevisiae URA3 gene present in plasmid pNKY51 [Alani et al. 1987] and was obtained by PCR using the following primers: PXCM-1 (5′-AAGGTACCGATAT CTCCAATACTTGTATGGAGGGCACAGTTAAGCC-3′) and PXCM2 (5′-AAGAGCTCGATATCCTCCAATACTC CTTTGGATCCCTTCCCTTTGCAAATAGT-3′). Primer PXCM-1 contains restriction sites for SacI, EcoRV and XcmI while PXCM-2 has sites for KpnI, EcoRV and XcmI (all sites are underlined). Both primers have sequences complementary to URA3 which allow amplification of a~600 pb stuffer DNA fragment. PCR was carried out in a volume of 50 μL containing 1.5 ng pNKY51, 0.2 mM dNTP, 0.2 μM each primer, 1× PCR buffer (100 mM Tris-HCl [pH 8.5], 500 mM KCl), 2 mM MgCl 2 and 2 U Taq polymerase (LCG Biotechnology). Amplification was performed for 30 cycles of 94°C/45 s, 65°C/45 s, 72°C/40 s after an initial denaturation step of 94°C/45 s. A final extension step was performed for 2 min/72°C. 
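The polylinker design hinges on the recognition sites carried by the two primers. As a quick consistency check, the sketch below scans the PXCM primer sequences given above for the canonical recognition sequences of the four enzymes involved (EcoRV GATATC, SacI GAGCTC, KpnI GGTACC, XcmI CCANNNNNNNNNTGG); it examines only the written 5′→3′ strand, and the site definitions are the standard ones rather than values taken from this paper.

```python
import re

# Scan the PXCM primers for the restriction sites used to build the pPCV polylinker.
# Recognition sequences are the canonical ones (EcoRV GATATC, SacI GAGCTC,
# KpnI GGTACC, XcmI CCANNNNNNNNNTGG); only the written (5'->3') strand is scanned.

SITES = {
    "EcoRV": "GATATC",
    "SacI": "GAGCTC",
    "KpnI": "GGTACC",
    "XcmI": "CCA" + "N" * 9 + "TGG",
}

PRIMERS = {
    "PXCM-1": "AAGGTACCGATATCTCCAATACTTGTATGGAGGGCACAGTTAAGCC",
    "PXCM-2": "AAGAGCTCGATATCCTCCAATACTCCTTTGGATCCCTTCCCTTTGCAAATAGT",
}

def site_regex(iupac):
    """Turn an IUPAC-style recognition sequence (N = any base) into a regex."""
    return re.compile(iupac.replace("N", "[ACGT]"))

for name, seq in PRIMERS.items():
    print(name)
    for enzyme, site in SITES.items():
        hits = [m.start() + 1 for m in site_regex(site).finditer(seq)]
        if hits:
            print(f"  {enzyme}: site starting at position(s) {hits}")
```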
The resulting amplicon was purified with UltraClean PCR Clean-Up Kit (MO BIO) and digested with SacI and KpnI following ligation to pBlueScript® II KS digested with the same enzymes. Results and discussion For vector construction, a minimal polylinker was designed ( Figure 1A) with the inclusion of restriction sites for XcmI, which produce 3′-T overhangs that can be used for cloning PCR products derived from amplification by Taq polymerase, and EcoRV, which yields blunt-ends suitable for cloning PCR products generated by Pfu DNA polymerases. It is argued that the use of XcmI is limited because vectors incubated with this enzyme are often partially digested leading to a high background of non-recombinant transformants [Xuejun et al. 2002]. This issue was solved by the insertion of a stuffer DNA sequence large enough to be easily separated by gel electrophoresis [Gu & Ye 2011;Jo & Jo 2001]. The new polylinker still allows blue/white selection because the lacZα reading frame is reestablished upon religation of the vector after removal of the stuffer DNA ( Figure 1A). When vectors digested with EcoRV are religated the lacZα reading frame is restored thus rendering the cells blue, whereas vectors digested with XcmI can only yield blue colonies if both T-overhangs are lost prior to religation. For stuffer DNA, a fragment of the yeast URA3 gene was amplified containing EcoRV and XcmI sites for amplicon ligation and SacI and KpnI for cloning into pBlueScript® II KS digested with the same enzymes ( Figure 1A). A selected clone was digested with different enzymes to confirm the presence of the stuffer DNA: EcoRV (558 bp), SacI + KpnI (570 bp), XcmI (534 bp) ( Figure 1B). The resulting vector was named pPCV ( Figure 1C). This vector was digested either with XcmI or EcoRV and the~2.9 kb versions of the linearized vectors were named pPCV-T and pPCV-B, respectively ( Figure 1C). To test the efficiency of the resulting vectors, a yeast LEU2 gene fragment was amplified by using Phusion or Taq polymerase and the resulting amplicons (~1.4 kb) were ligated into pPCV-B and pPCV-T, respectively. The results of bacterial transformation are presented on Table 1 and the presence of inserts was assessed by PCR using primers 5-leud and 3-leud ( Figure 2). The low percentage of white colonies observed when the pPCV-B system was used is explained by the fact that ligation of blunt-ended molecules is generally more difficult than sticky-ends. Nonetheless, a high percentage (83.3%) of white colonies had inserts. As for the pPCV-T system, most of the white colonies (90.0%) observed had inserts. All other false positives can be explained by the loss of one T-overhang following religation, which results in the loss of original lacZα reading frame as has been previously observed [Arashi-Heese et al. 1999]. The results shown in this work show that pPCV can be successfully used for high efficiency cloning of amplicons. It provides in the same cloning platform two important advantages: i) the ability to clone PCR products derived from different DNA polymerases still allowing blue/white selection and, ii) its minimal polylinker prevents undesirable restriction sites at the ends of cloned amplicon after subcloning. Plasmid pPCV is available upon request.
2017-06-26T02:20:35.866Z
2013-09-05T00:00:00.000
{ "year": 2013, "sha1": "6495fd4876568cd999d66798e2579a0254e5f084", "oa_license": "CCBY", "oa_url": "https://springerplus.springeropen.com/track/pdf/10.1186/2193-1801-2-441", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "bb756c0f6001cbd1d1c5598045a5e48d063a7394", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
5823094
pes2o/s2orc
v3-fos-license
T cells cooperate with palmitic acid in induction of beta cell apoptosis Background Diabetes is characterized by progressive failure of insulin producing beta cells. It is well known that both saturated fatty acids and various products of immune cells can contribute to the reduction of beta cell viability and functionality during diabetes pathogenesis. However, their joint action on beta cells has not been investigated, so far. Therefore, we explored the possibility that leukocytes and saturated fatty acids cooperate in beta cell destruction. Results Rat pancreatic islets or insulinoma cells (RIN) were co-cultivated with concanavalin A (ConA)-stimulated rat lymph node cells (LNC), or they were treated with cell-free supernatants (Sn) obtained from ConA-stimulated spleen cells or from activated CD3+ cells, in the absence or presence of palmitic acid (PA). ConA-stimulated LNC or Sn and PA cooperated in inducing caspase-3-dependent RIN cell apoptosis. The observed effect of PA and Sn on RIN cell viability was mediated by p38 mitogen-activated protein kinase (MAPK)-signaling and was achieved through auto-destructive nitric oxide (NO) production. The cooperative effect of Sn was mimicked with the combination of interleukin-1β, interleukin-2, interleukin-6, interleukin-17, interferon-γ and tumor necrosis factor-α. Conclusion These results imply that stimulated T cells produce cytokines that cooperate with saturated free fatty acids in beta cell destruction during diabetes pathogenesis. Background Diabetes mellitus is a common name for a group of metabolic diseases characterized by hyperglycemia resulting from defects in insulin secretion and/or action. Hyperglycemia and other related metabolic disturbances can lead to serious damage to various systems of the body, especially nerves and blood vessels. Statistical data show that the excess global mortality attributable to diabetes in the year 2000 was 5.2% of all deaths [1], while its prevalence for all age-groups worldwide was expected to increase from 2.8% in 2000 to 4.4% in 2030 [2]. The most frequent forms of diabetes mellitus are type 2 diabetes (T2D) and type 1 diabetes (T1D). T2D is classically considered to be a metabolic disease characterized by insulin resistance and additional disorders, such as pancreatic beta cell dysfunction and decrease in beta cell mass, which intensively contribute to the disease course [3,4]. One major reason for insulin resistance in T2D is obesity [4,5], which is associated with high glucose and free fatty acid levels circulating throughout the body affecting various cell types including pancreatic beta cells [6]. It is well known that hyperlipidemia contributes to T2D pathogenesis [4,5,7], and that high concentrations of saturated fatty acids, including palmitic acid (PA), lead to impairment of insulin action and beta-cell dysfunction [6,8,9]. Increased adiposity enhances adipose tissue secretion of cytokines, such as TNF-α, IL-1β and IL-6 which are cytotoxic to beta cells, as well as production of adipokines, including leptin, resistin, and adiponectin, which have a cytoprotective role in cytokine and free fatty acidinduced beta cell destruction [10][11][12]. Obesity and insulin resistance have been frequently associated with a state of low-grade inflammation and therefore it is assumed that inflammation contributes in a major way to the development of T2D [4,13,14]. 
An accumulating set of data supporting this assumption includes findings of inflammation-related changes, such as amyloid deposits and fibrosis, increased beta cell death and the presence of leukocyte infiltrates in the pancreata of T2D patients and animals [4,15,16]. Additionally, it is becoming increasingly clear that obesity is also an important predisposing factor for T1D [4,5], a disease with proposed autoimmune etiopathology, in which immune cells infiltrate the endocrine pancreas and cause beta cell destruction [17]. The means by which macrophages and T cells exert their cytotoxic actions upon beta cells are rather well established and comprise various soluble mediators, such as oxygen free radicals, nitric oxide (NO) and cytokines, including IL-1β, IL-6, IFN-γ and TNF-α, but the primal target of the autoimmune attack has not yet been defined [17]. It has been suggested that physiological beta cell turnover may expose specific tissue antigens that are carried to the draining lymph nodes by infiltrating macrophages [17]. Importantly, this initial turnover may be a consequence of obesity, i.e. hyperlipidemia or the excess production of cytokines in adipose tissues [4]. Thus, in the present paper we investigated the interaction between one of the most common free fatty acid -PA and soluble immune cell products, as they have individually been shown to be deleterious for beta cells [8,11,18], but their joint effect on beta cells has not been investigated so far. Here, we demonstrate that PA and soluble products of T cells cooperate in beta cell destruction in vitro. We show that they trigger caspase-3-mediated, NO-dependent apoptosis in rat insulinoma cells through activation of p38 mitogen-activated protein-kinase (MAPK). Reagents, cells and cell cultures All reagents were from Sigma (St Louis, MO, USA) and all dishes for culturing cells from Sarstedt (Numbrecht, Germany), unless stated differently. To make the stock solution for further dilution in RPMI 1640, palmitic acid (PA) was mixed overnight at 37°C in Krebs Ringer HEPES buffer containing 20% BSA (fraction V, Roche, Basel, Switzerland). The following recombinant cytokines were used in the experiments: rat IFN-γ (R&D Systems, MI, USA, 10 ng/ml), mouse IL-17 (R&D Systems 50 ng/ml), mouse IL-1β (10 ng/ml), rat TNF-α (10 ng/ml), human IL-2 (100 ng/ml), rat IL-6 (R&D Systems, 10 ng/ml). SB202190 was used as a selective inhibitor of p38 MAPK activation and was applied to cell cultures at least 30 minutes prior to additional treatments. Hemoglobin (Hb, 20 mg/ml) was used as the NO quencher, and was applied to cultures at least 1 hour before additional treatments. Rat insulinoma cells RINm5F (RIN) were grown under standard conditions (37°C, 5% CO 2 ) in RPMI 1640 medium supplemented with 5% fetal calf serum (FCS, PAA Chemicals, Pasching, Austria), L-glutamine, 2-mercaptoethanol and antibiotics (culture medium) in tissue culture flasks until reaching approximately 80% confluence. Then, they were detached with trypsin solution (0.25%) and ethylenediaminetetraacetic acid (EDTA, 0.02%) in PBS. Cells were washed and seeded into 96-well flat-bottom plates (1 × 10 4 /well) for the MTT test, cell-based ELISA and Griess reaction, into 24-well plates (1 × 10 5 /well) for co-cultivations, into 6-well plates (2.5 × 10 5 /well) for cytofluorimetric analysis, or tissue culture flasks (25 cm 3 , 1 × 10 6 ) for western blot, ca. 16 hours before treatment. 
Subsequently, fresh culture medium with appropriate reagents and/or cells was added to RIN cell cultures. Pancreatic islets were isolated from male Dark Agouti (DA) rats using the collagenase digestion method. The pancreata were minced and subsequently incubated with collagenase type V solution (1 mg/ml) in PBS at 37°C for 10 min with vigorous shaking. After incubation, HBSS was added to stop the digestion. The islets were handpicked and seeded for the experiments into 96-well flat-bottom plates (4 × 10 1 /well) in culture medium (10% FCS). The islets were used in experiments after an overnight rest. Lymph node cells (LNC) were isolated from cervical lymph nodes from DA rats and spleen cells from spleens of Albino Oxford (AO) rats, DA rats, CBA mice, Balb/C mice and C57BL/6 mice. For the purification of T cells from LNC of DA rats, anti-rat CD3-biotin conjugated antibody (BD Biosciences), MACS streptavidin microbeads and MACS separation columns were used according to the instructions of the manufacturer (Miltenyi Biotec, Aubum, CA). The obtained cells were more than 98% positive for CD4 or CD8 as deduced by cytofluorometry (FACS Calibur, BD Biosciences), and were stimulated with plate bound anti-CD3 (1 μg/ml) and anti CD28 (1 μg/ml) antibodies (eBi-oscience, San Diego, CA). The population of CD3cells, obtained by the same procedure as cells not bound to CD3-biotin conjugated antibody were more than 98% negative for CD3 (as deduced by cytofluorimetry) and were stimulated with LPS (1 μg/ml, Sigma). Cell free supernatants of CD3 + and CD3cultures were collected after 48 hours of cultivation. RIN cells and LNC (1 × 10 6 / well) were co-cultivated in the absence or presence of tissue culture inserts (Nunc, Denmark). Spleen cell cultures (5 × 10 6 /ml) were stimulated with concanavalin A (ConA, 2.5 μg/ml) for 48 hours and subsequently cell-free supernatants (Sn) were collected. Except in experiments where 10% Sn was combined with 32 μM PA, 40% Sn was employed. α-Methyl-D-mannoside (10 mg/ml) was used to neutralize the biological activity of ConA. The animals were obtained from the breeding facility of the Institute for Biological Research "Siniša Stanković" and were kept under standardized conditions. All experiments were conducted in accordance with local and international legislations regarding the wellbeing of laboratory animals. Cell viability assay In order to asses the viability of RIN cells, pancreatic islets or LNC we used the mitochondrial-dependent reduction of 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl-tetrazolium bromide (MTT) to formazan. At the end of appropriate treatments, cell culture supernatants were removed from the plates and MTT solution (1 mg/ml) was applied. Alternatively, pancreatic islets were collected in tubes, spun down, supernatants removed and the cell pellet dissolved in the MTT solution. Finally, in co-cultivation experiments, LNC were removed from cell cultures, spun down, supernatants removed and the residue dissolved in MTT solution. MTT solution was also added to the RIN cells remaining in the plates. Incubation with MTT lasted for 30 minutes at 37°C. Dimethyl sulfoxide (DMSO) was added to the pellet of pancreatic islets, to LNC and to plated RIN cells to dissolve the formazan crystals. The absorbance was measured at 570 nm, with a correction at 690 nm, using an automated microplate reader (LKB 5060-006, LKB, Vienna, Austria). 
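The conversion from raw MTT absorbances to the viability figures reported later (percentage of the control cultures grown in medium alone) is simple arithmetic on the background-corrected readings. A minimal sketch is shown below; the absorbance values in it are made-up illustrative numbers, not data from these experiments.

```python
# Convert raw MTT absorbances (570 nm, background-corrected at 690 nm) into
# viability expressed as a percentage of untreated-control cultures.
# The absorbance readings below are made-up illustrative numbers, not data
# from the study.

from statistics import mean

def corrected(a570, a690):
    """Background-corrected absorbance for one well."""
    return a570 - a690

def viability_percent(treated_wells, control_wells):
    """Mean corrected absorbance of treated wells as a % of control wells."""
    treated = mean(corrected(a, b) for a, b in treated_wells)
    control = mean(corrected(a, b) for a, b in control_wells)
    return 100.0 * treated / control

# (A570, A690) pairs for triplicate wells -- illustrative values only.
control = [(0.92, 0.05), (0.95, 0.06), (0.90, 0.05)]
pa_plus_sn = [(0.49, 0.05), (0.52, 0.06), (0.50, 0.05)]

print(f"RIN cell viability (PA + Sn): {viability_percent(pa_plus_sn, control):.1f}% of control")
```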
Measurement of NO generation Cytofluorimetric assay was used for direct measurement of NO release with 4-amino-5-methylamino-2',7'-difluorofluorescein (DAF-FM, 1 μM), which was added to cultures 1 hour prior to the end of the treatment period. After washing in PBS, the cells were detached, resuspended in PBS and analyzed (excitation at 488 nm, emission at 510 nm) on a FACS Calibur flow cytometer (BD Biosciences). Apoptosis detection and caspase-3 assay Apoptotic cells were detected using an annexinV-FITC/ EtD-III staining kit (Biotium, Hayward, CA), according to the manufacturer's protocol. Briefly, after treatment, RIN cells were detached, resuspended in Annexin Binding Buffer containing AnnexinV and EtDIII, and incubated in the dark at room temperature for 15 minutes. Subsequently, samples were diluted with four volumes of Annexin Binding Buffer and analysed with FACS Calibur flow cytometer (BD Biosciences) using CellQuest Pro software (BD Biosciences). The activity of caspase-3 was determined in cultures using the Caspase-3 DEVD-R110 Fluorimetric and Colorimetric Assay Kit (Biotium, CA), according to the manufacturer's protocol. The ability of cell lysates to cleave the specific caspase-3 substrate was quantified fluorometrically using an excitation wavelength of 485 nm and an emission wavelength of 535 nm with a microplate reader (Chameleon, Hidex, Turku, Finland). The results are expressed as amount of substrate conversion (μM), deduced from a standard curve generated from known concentrations of the dye R110. Cell-based ELISA The expression of inducible nitric oxide synthase (iNOS) was determined in triplicate by cell-based ELISA using specific antibody (Santa Cruz Biotechnology Inc., Santa Cruz, CA, USA), according to a previously described protocol [19]. Briefly, after adequate treatment RIN cells were fixed in 4% paraformaldehyde (PFA), endogenous peroxidase activity was quenched with hydrogen-peroxide and the cells exposed to primary anti-iNOS antibody 1:200 dilution and secondary HRP-conjugated detection antibody (GE Healthcare) at 1:2000 dilution. The substrate for HRP was 3,3',5,5'-tetra-methylbenzidine and the reaction was stopped with 1 M HCl. The absorbance was measured at 450 nm and the cells were stained with crystal violet in order to correct for differences in cell numbers. The final results were obtained by division of the absorbance at 450 nm after adding the stop solution and the absorbance at 570 nm after the additional crystal violet staining (A 450 /A 570 ). Statistical Analysis Data are presented as the mean +/-SD of values obtained in independent experiments. Student's t-test was performed for analysis of the differences between means observed in the experiments. Cooperative effect of PA and activated immune cell products on rat pancreatic beta cell viability We co-cultivated RIN cells and rat lymph node cells (LNC) stimulated with ConA (2.5 μg/ml) in the absence or presence of PA. In preliminary experiments the PA concentration of 125 μM had a significant, yet limited effect on RIN cell viability (Fig 1A), so this dose was used in subsequent experiments. After 20 hours of treatment, either PA or activated LNC decreased RIN cell viability, but the effect was markedly greater if RIN cells were co-cultivated with activated LNC in the presence of PA ( Fig 1B). Interestingly, if direct RIN-to-LNC contact was prevented with tissue culture inserts, LNC were unable to reduce RIN cell viability, but they still cooperated with PA in reducing RIN cell number (Fig 1C). 
This result implied that soluble mediators produced by activated immune cells cooperated with PA in inducing RIN cell death. Such an assumption was confirmed, as supernatants obtained from cultures of ConA-stimulated rat spleen cells (Sn) also efficiently cooperated with PA in RIN cell destruction ( Fig 1D). In order to eliminate the possibility that the observed effects of Sn were mediated through direct action of ConA upon RIN cells, Sn were treated with α-methyl-Dmannoside (10 mg/ml) with the aim of blocking ConA activity. The blocker did not interfere with the effect of Sn on RIN cell viability (data not shown), thus excluding the possibility of direct influence of ConA on RIN cells. Importantly, Sn and PA also cooperated in the reduction of pancreatic islets isolated from DA rats (Fig 1E), thus showing that the observed effect was not specific for the transformed insulinoma cell line, but that it had relevance for beta cells in general. Finally, with the purpose of investigating if lower concentrations of PA and Sn also cooperate in reduction of RIN cell viability, RIN cells were treated with 32 μM PA and 10% Sn. As presented in Fig 1F, individually PA and Sn had just a limited inhibitory effect on RIN cells (12% and 11.4%, respectively) but their joint effect was pronounced (44.6%). This suggested that PA and Sn acted synergistically upon RIN cells. Sn and PA induce apoptosis of RIN cells Our next aim was to explore if the observed decrease in RIN cell viability was a consequence of apoptosis induction. Therefore, we analyzed AnnexinV-FITC and Et-DIIIstained RIN cells cytofluorimetrically. While PA or Sn obtained from DA rat spleen cells applied alone did not induce significant apoptosis in RIN cells after 6 hours of incubation, simultaneous application of PA and Sn induced early apoptotic changes in more than 20% of the cells (Fig 2A, C-F). Importantly, the pro-apoptotic effect was also observed after just 2 hours of treatment (5-10% of apoptotic cells), showing the rapidity of the influence. These results once again indicated that PA and Sn cooperate in a synergistic fashion. Moreover, increased activity of the pro-apoptotic cleaving enzyme caspase-3 was found in RIN cells subjected to the simultaneous treatment with PA and Sn (Fig 2B). Thus, we could conclude that concurrent application of PA and Sn decreased RIN cell viability through caspase-3 dependent induction of apoptosis. T cell cytokines cooperate with PA in the reduction of RIN cell viability In order to investigate the nature of the component(s) of Sn responsible for the observed cooperation with PA, Sn made from C57BL/6 mouse spleen cells were tested for efficiency in cooperation with PA. Importantly, Sn obtained from C57BL/6 mouse spleen cells were as efficient in the reduction of RIN cell viability as Sn obtained from rat cells, both individually and in cooperation with PA ( Fig 3A). Moreover, similar efficiency was observed for Sn obtained from CBA and Balb/C mice or AO rats (data not shown). Thus, we could conclude that the soluble factor(s) involved in the cooperation with PA were not species or strain-specific. Furthermore, we tested the resistance of the soluble component(s) to heat by boiling Sn for 10 min before use in the experiments. Exposure of Sn to heat led to loss of the cooperative cytotoxic action of Sn and PA (Fig 3B) indicating that heat-sensitive components were responsible for the cooperation with PA in RIN cell death induction. 
The possibility that cytokines present in the Sn were involved in this cooperation was investigated by treating RIN cells with the combination of IL-1β, IFN-γ, TNF-α, IL-17, IL-2 and IL-6 in the absence or presence of PA. This combination of cytokines alone reduced RIN cell number and also cooperated with PA in the reduction of RIN cell viability (Fig 3C). The effect was similar to that of Sn, so we could conclude that cytokines might be responsible for the cooperative reduction of RIN cell viability by Sn and PA. Finally, since Sn were products of mixed populations of cells, including T cells, macrophages, B cells and other cell types, LNC were separated into CD3 + cells (T cells) and CD3cells (all the other cells), by magnetic bead purification, and the obtained populations were used for production of Sn, in order to determine their relative contribution to the observed cooperation. Importantly, Sn obtained from CD3 + cells stimulated with anti-CD3 and anti-CD28 antibodies cooperated with PA in the reduction of RIN cell viability (Fig 3D). On the other hand, Sn obtained from CD3cells stimulated with LPS did not cooperate with PA in RIN cell destruction. Hence, we could conclude that T cells were producers of the cytokines responsible for the cooperation with PA. PA+Sn-induced RIN cell death is dependent on p38 MAPK activation Our further goal was to determine if p38 MAPK-signaling is involved in PA+Sn-induced apoptosis in RIN cells. As the first step, SB202109, a selective p38 MAPK inhibitor, was applied to RIN cultures treated with the combination of PA and Sn, and this inhibitor was shown to prevent PA+Sn-triggered apoptosis in RIN cells (Fig 4A). Accordingly, PA+Sninduced p38 MAPK activation was detected in RIN cells ( Fig 4B), thus supporting the importance of p38 induction for PA+Sn induced apoptosis in RIN cells. However, PA, but not Sn, was able to activate p38 to the levels observed in cultures treated with both PA and Sn. Thus, it seems that p38 activation is not enough for efficient initiation of RIN cell apoptosis, but that it is necessary for induction of the death of these cells by coordinate action of PA and Sn. Hemoglobin protects RIN cells from PA+Sn-induced apoptosis We also explored if the effect of PA and Sn on RIN cell viability was indirect, through induction of nitric oxide (NO) production in these cells. Inducible nitric oxide synthase (iNOS) expression measured by cell-based ELISA showed that PA+Sn induced expression of this protein in RIN cells, after 2 and 6 hours of incubation (Fig 5A). Additionally, the treatment led to NO production in RIN cells, while hemoglobin, a NO quencher, down-regulated the levels of free NO (Fig 5B) and accordingly protected RIN cells from apoptotic cell death induced by PA and Sn ( Fig 5C). Importantly, the p38 signaling inhibitor -SB202109 that protected RIN cells from the cytotoxicity also inhibited NO release in RIN cells (Fig 5D). Thus, these results clearly suggest that NO produced in RIN cells in a p38dependent manner contributes to PA+Sn-induced RIN cell apoptosis in a major way. Discussion In the present paper we show that cytokines produced by T cells cooperate with a saturated free fatty acid in the induction of beta cell apoptosis. The observed cooperation is synergistic both at the level of the intensity and the speed of the action. 
Importantly, the cooperative effect was also observed if primary pancreatic islets were used as the target population, thus excluding the possibility that the cytotoxicity was specific for the transformed cell line. Apoptotic cell death of RIN cells under the influence of PA and Sn There is an increasing number of reports indicating the importance of saturated fatty acids or soluble products of immune cells, including cytokines and NO in the reduction of beta cell mass during diabetes pathogenesis [8,11,18]. Nevertheless this is the first report to our knowledge, that describes cooperation between a saturated fatty acid and soluble immune mediators in the destruction of pancreatic beta cells. The fact that soluble mediators cooperate with PA is clearly shown in the experiments where RIN cells and LNC were separated with tis-sue-culture inserts, and substantiated with the corresponding results obtained with cell-free Sn. Moreover, similar efficiency of Sn from various strains of rats and mice, clearly suggests that product(s) responsible for the observed cooperation are not species-or strain-specific. Furthermore, the inability of heat-treated Sn to cooperate with PA in RIN cell destruction implies that the soluble product(s) are heat-sensitive, i.e. proteins. The obvious protein candidates are cytokines, as IL-1β, IL-6, IFN-γ and TNF-α have been shown to be efficient in β-cell destruction [8,11,17,18]. Additionally, we previously reported that IL-17 contributes to NO-dependent cytotoxicity of beta cells [20]. Moreover, one of the major constituents of Sn -IL-2 was found to cooperate with PA in the activation Cooperation of mouse Sn, heated Sn, cytokines, CD3 + Sn with PA in reduction of RIN cell viability Figure 3 Cooperation of mouse Sn, heated Sn, cytokines, CD3 + Sn with PA in reduction of RIN cell viability. RIN cells were cultivated in the absence (medium) or presence of 125 μM PA and/or 40% C57Bl/6 mouse Sn (SnM -A), and/or 40% Sn or Sn boiled for 10 minutes (SnB-B), and/or 10 ng/ml IL-1β, 10 ng/ml IL-6, 10 ng/ml IFN-γ, 10 ng/ml TNF-α, 50 ng/ml IL-17 and 100 ng/ml IL-2 (cytokines -C) and/or 40% Sn obtained from CD3 + LNC or CD3 -LNC (SnCD3 + , SnCD3 --D). MTT assay was performed after 20 hours of cultivation and the results are presented as the percentage of control absorbance values obtained in cultures grown in medium alone. Mean values +/-SD of values obtained in 11 (A), 7 (B), 5 (C) and 4 (D) individual experiments with similar results are presented. *p < 0.05 represents a statistically significant difference between values obtained from cultures of RIN cells treated with PA and SnM (A) or PA and Sn (B) or PA and cytokines (C) or PA and SnCD3 + (D) and any other culture of RIN cells. of Jak-3 and STAT-5 in human lymphocytes [21]. Therefore, we used the combination of IL-1β, IL-6, IFN-γ, TNFα, IL-17 and IL-2 and were able to mimic the effects of Sn on RIN cells. Finally, the observed ability of Sn obtained from pure CD3 + , but not CD3cells, to cooperate with PA in reducing RIN cell viability indicates that T cells are the major cell population responsible for the production of the soluble product(s) which cooperate with PA in the process. Importantly, the cytokines we used to mimic the effect of Sn are either typical T-cell products (IL-2, IFN-γ, IL-17) or they can be produced by various cells including T cells (IL-6, IL-1β, TNF-α). This is in agreement with the observed ability of pure CD3 + cell culture supernatants to affect RIN cell viability. 
Also, if we take into account that some cytokines (IL-6, IL-1β, TNF-α) can be produced by B cells, macrophages and other CD3cells, and that production of all of the cytokines by T cells could be supported by CD3cells, it becomes clear why CD3 + cells produced a "weaker" supernatant than lymph node cells. Moreover the inability of CD3cells to provide a supernatant effective against RIN cells, leaves us with T lymphocytes as the prime suspects for the cooperation with fatty acids in the destruction of beta cells in diabetes. Furthermore, we can conclude that the major signaling pathway responsible for the induction of apoptosis in RIN cells under the coordinated influence of PA and Sn is the p38 MAPK pathway. This conclusion is based on the p38activation observed under the influence of PA and Sn, as well as on the complete protection of RIN cells from the apoptosis induction by the specific p38 signaling inhibitor. It is well known that p38 is involved in induction of apoptosis [22], and more specifically in the induction of apoptosis of beta cells, under the influence of both cytokines [23,24] and PA [25]. In our hands PA, but not Sn, was able to activate p38 in RIN cells. However, although PA potently induced p38 in RIN cells, it did not cause massive apoptosis. Importantly, with the same level of p38 activation under the cooperative action of PA and Sn, a substantial proportion of RIN cells underwent apoptotic changes, while inhibition of the p38 signaling pathway prevented the induction of apoptosis in RIN cells. Thus, it seems that activation of p38 per se is not enough for apoptosis induction in RIN cells, but at the same time p38 activation is probably necessary for the observed cooperative induction of apoptosis in these cells. Finally, the major effector molecule responsible for RIN cell apoptosis may be NO. Namely, the NO scavenger Hb protects RIN cells from the deleterious effect of PA and Sn, while the p38 signaling inhibitor, which inhibits PA+Snimposed apoptosis of RIN cells, also prevents PA+Sninduced NO synthesis in these cells. The importance of NO for induction of beta cell apoptosis by cytokines has been repeatedly reported, while the role of NO in fatty acid induced death of beta cells is still controversial. For instance, it was shown that IL-1 or TNF-α induced cell death in INS-1 cells and that IL-1, but not palmitate or oleate potently induced iNOS gene expression and NO generation in these cells [8]. There are, however, findings indicating the ability of primary beta cells to produce NO and express both constitutive and inducible isoforms of NOS under the influence of PA [25][26][27]. In our experiments, massive production of NO was observed in RIN cells under the coordinated action of PA and Sn. The novelty of our finding is that PA could contribute to NO induction in the inflammatory milieu, represented in our experiments by Sn. Generally, it is thought that NO- dependent destruction of beta cells is characteristic of T1D, where immune cells infiltrate pancreatic islets and produce NO-inducing cytokines, while on the contrary fatty acids do not provoke NO-dependent cell death in T2D [28]. Here, we show that PA is able to cooperate with Sn in inducing NO synthesis. Thus, it seems reasonable to assume that free fatty acid levels could be very important for the destruction of beta cells during ongoing inflammation in the pancreas. 
Such a reasoning favors hypotheses that inflammation is important for the development of T2D, as well as that obesity is a predisposing factor for T1D [4,5]. Taken together, as it was shown that both NO and p-38 signaling contribute to apoptosis induction in RIN cells under the influence of PA and Sn in a major way, it is tempting to speculate that in our system the cooperative effect of PA and Sn is dependent on activation of p38, which in turn leads to generation of NO and destruction of RIN cells. Once NO is generated, p38 might play an important role in the predisposition of RIN cells to NOimposed apoptosis, as genetic down-regulation of p38α was previously shown to lower the sensitivity of beta cells to cell death induced by a NO donor [24]. Importantly, besides its role in classical induction of apoptosis, p38 activity has been assigned a major role in endoplasmatic reticulum (ER) stress-induced apoptosis [29]. Also, NO produced in beta cells under the influence of cytokines has been shown to contribute largely to the induction of ER-stress in beta cells [30]. Thus, the possibility of the involvement of the ER stress components in the observed effect of PA and cytokines on beta cells should be investigated in the future. Conclusion Our results imply that limited hyperlipidemia and inflammation, when acting in concert, may carry out a powerful attack upon pancreatic beta cells. The cooperative cytotoxic effect of palmitate and soluble T cell products described herein has clear significance for understanding the pathogenesis of diabetes, as it suggests that there might be direct cooperation between factors of obesity and inflammation in beta cell destruction. However, as our data are from a simplified in vitro experimental sys-PA and Sn induce RIN cell apoptosis through induction of nitric oxide production
2014-10-01T00:00:00.000Z
2009-05-22T00:00:00.000
{ "year": 2009, "sha1": "adebc029a487f88d57bd3995b954ece8f4b53361", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1186/1471-2172-10-29", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "adebc029a487f88d57bd3995b954ece8f4b53361", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
119715748
pes2o/s2orc
v3-fos-license
On Mockenhoupt's Conjecture in the Hardy-Littlewood Majorant Problem The Hardy-Littlewood majorant problem has a positive answer only for expo- nents p which are even integers, while there are counterexamples for all p =2 2N. Montgomery conjectured that even among the idempotent polynomials there must exist some counterex- amples, i.e. there exist some finite set of characters and some ? signs with which the signed character sum has larger pth norm than the idempotent obtained with all the signs chosen + in the character sum. That conjecture was proved recently by Mockenhaupt and Schlag. However, Mockenhaupt conjectured that even the classical 1 + e2?ix ? e2?i(k+2)x three- term character sums, used for p = 3 and k = 1 already by Hardy and Littlewood, should work in this respect. That remained unproved, as the construction of Mockenhaupt and Schlag works with four-term idempotents. In our previous work we proved this conjecture for k = 0; 1; 2, i.e. in the range 0<p<6, p =2 2N. Continuing this work here we demonstrate that even the k = 3; 4 cases hold true. Several refinement in the technical features of our approach include improved fourth order quadra- ture formulae, finite estimation of G02=G (with G being the absolute value square function of an idempotent), valid even at a zero of G, and detailed error estimates of approximations of various derivatives in subintervals, chosen to have accelerated convergence due to smaller radius of the Taylor approximation. Introduction We denote, as usual, T := R/Z the one dimensional torus or circle group. Following Hardy and Litlewood [7], f is said to be a majorant to g if | g| ≤ f . Obviously, then f is necessarily a positive definite function. The (upper) majorization property (with constant 1) is the statement that whenever f ∈ L p (T) is a majorant of g ∈ L p (T), then g p ≤ f p . Hardy and Littlewood proved this for all p ∈ 2N -this being an easy consequence of the Parseval identity. On the other hand already Hardy and Littlewood observed that this fails for p = 3. Indeed, they took f = 1 + e 1 + e 3 and g = 1 − e 1 + e 3 (where here and the sequel we denote e k (x) := e(kx) and e(t) := e 2πit , as usual) and calculated that f 3 < g 3 . The failure of the majorization property for p / ∈ 2N was shown by Boas [3]. Boas' construction exploits complex Taylor series expansion around zero: for 2k < p < 2k + 2 the counterexample is provided by the polynomials f, g := 1 + re 1 ± r k+2 e k+2 , with r sufficiently small to make the effect of the first terms dominant over later, larger powers of r. Date: May 5, 2014. Supported in part by the Hungarian National Foundation for Scientific Research, Project #'s K-81658 and K-100461. 1 Utilizing Riesz products -an idea suggested to him by Y. Katznelson -Bachelis proved [2] the failure of the majorization property for any p / ∈ 2N even with arbitrarily large constants. That is, not even g p < C p f p holds with some fixed constant C = C p . Montgomery conjectured that the majorant property for p / ∈ 2N fails also if we restrict to idempotent majorants, see [11, p. 144]. (A suitable integrable function is idempotent if its convolution square is itself: that is, if its Fourier coefficients are either 0 or 1.) This has been recently proved by Mockenhaupt and Schlag in [10]. Oddly enough, the quite nice, constructive example is given with a four-term idempotent polynomial, although trinomials may seem simpler objects to study. 
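The p = 3 computation of Hardy and Littlewood referred to above is easy to reproduce numerically. The sketch below (not part of the original argument) approximates the two third-power means by a Riemann sum over a fine grid; no error control is attempted here, in contrast to the quadrature developed later in the paper.

```python
import numpy as np

# Numerically compare the L^3(T) norms of the Hardy-Littlewood pair
# f = 1 + e_1 + e_3 and g = 1 - e_1 + e_3, where e_k(x) = exp(2*pi*i*k*x).
# A plain Riemann sum over a fine grid of [0, 1) is accurate enough here;
# this is an illustration, not the error-controlled quadrature used in the paper.

N = 200_000
x = (np.arange(N) + 0.5) / N  # midpoints of a uniform grid on [0, 1)

def e(k, x):
    return np.exp(2j * np.pi * k * x)

def Lp_norm(values, p):
    return np.mean(np.abs(values) ** p) ** (1.0 / p)

f = 1 + e(1, x) + e(3, x)
g = 1 - e(1, x) + e(3, x)

nf, ng = Lp_norm(f, 3), Lp_norm(g, 3)
print(f"||f||_3 ≈ {nf:.6f}, ||g||_3 ≈ {ng:.6f}, ||f||_3 < ||g||_3: {nf < ng}")
```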
Indeed, there is a considerable knowledge, even if usually for the maximum norm, on the space of trinomials, see e.g. [6,12,13]. Note that three-term examples are the simplest we can ask for, as twoterm polynomials can never exhibit failure of the majorization property. In the construction of Mockenhaupt and Schlag, however, the key role is played by the fact that the given 4-term idempotent is the product of two two-term idempotents, the p th power integral of which then can be expressed by the usual trigonometric and hyperbolic functions. So even if four terms is a bit more complicated, but the product form gives way to a manageable calculation. Nevertheless, one may feel that Boas' idea, i.e. the idea of cancelation in the (k + 1) st Fourier coefficients works even if r is not that small -perhaps even if r = 1. The difficulty here is that the binomial series expansion diverges, and we have no explicit way to control the interplay of the various terms occurring with the ± signed versions of our polynomials. But at least there is one instance, the case of p = 3, when all this is explicitly known: already Hardy and Littlewood [7] observed that failure of the majorant property for p = 3 is exhibited already by the pair of idempotents 1 + e 1 ± e 3 . In fact, this idempotent example led Montgomery to express (in a vague form, however, see [11], p. 144) his conjecture on existence of idempotent counterexamples. There has been a number of attempts on the Montgomery problem. In particular, led by the examples of Hardy-Littlewood and Boas, Mockenhaupt [9] expressed his view that 1 + e 1 ± e k+2 , where 2k < p < 2k + 2, should provide a counterexample in the Hardy-Littlewood majorant problem, (at least for k = 1, 2). So we are to discuss the following reasonably documented conjecture. We have proved this for k = 0, 1, 2 in [8]. One motivation for us was the recent paper of Bonami and Révész [4], who used suitable idempotent polynomials as the base of their construction, via Riesz kernels, of highly concentrated ones in L p (T) for any p > 0. These key idempotents of Bonami and Révész had special properties, related closely to the Hardy-Littlewood majorant problem. For details we refer to [4]. For the history and relevance of this closely related problem of idempotent polynomial concentration in L p see [4,5], the detailed introduction of [8], the survey paper [1], and the references therein. As already hinted by Mockenhaupt's thesis [9], proving that 1 + e 1 ± e k+2 would be a counterexamle in the Hardy-Littlewood majorant problem may require some numerical analysis as well. However, we designed a way to accomplish this differently than suggested by Mockenhaupt, for we don't know how to get it done along the lines hinted by him. Instead, in [8] we used function calculus and support our analysis by numerical integration and error estimates where necessary. These methods are getting computationally more and more involved when k is getting larger. Striving for a worst-case error bound in the usual Riemann numerical integration formula forces us to consider larger and larger step numbers (smaller and smaller step sizes) in the division of the interval [0, 1/2], where a numerical integration is to be executed. 
Therefore, for k getting larger, we can as well expect the step numbers increase to a numerically extraneous amount, where calculations loose liability in view of the possibly accumulating small errors of the computation of the operations and regular function values -powers, logarithms and trigonometrical or exponential functions -involved. Any reader would readily accept a proof, which with a certain precise error estimate refers to a numerical integration formula on say a few hundred nodes, but perhaps no reader would be fully convinced reading that a numerical tabulation and integration on several tens of thousands of function values led to the numerical result. Correspondingly, in this paper we settle with the goal of keeping any numerical integration, i.e quadrature, under the step number (or number of nodes, division number) N = 500, that is step size h = 0.001. Calculation of trigonometrical and exponential functions, as well as powers and logarithms, when within the numerical stability range of these functions (that is, when the variables of taking negative powers or logarithms is well separated from zero) are done by mathematical function subroutines of usual Microsoft excel spreadsheet, which computes the mathematical functions with 15 significant digits of precision. Although we do not detail the estimates of the computational error of applying spreadsheets and functions from Microsoft Excel tables, it is clear that under this step number size our calculations are reliable well within the error bounds. For a more detailed error analysis of that sort, which similarly applies here, too, see our previous work [8], in particular footnote 3 on page 141 and the discussion around formula (22). In view of the above considerations, instead of pushing forward exactly the same numerical analysis as done in [8] for k = 1, 2 also for higher values of k, (which could have been done at least for some k, though), here we renew the approach and invoke a number of new features of the numerical analysis. These "tricks" will enable us to keep N below 500, and thus keep the invoked numerical calculations of quadratures reliable. First, instead of the classical and simplest numerical integration by using "brute force" Riemann sums, we apply a more involved quadrature formula (11), derived from Taylor approximation, which in turn allows us to keep the step number under good control. Here instead of the most famous Simpson rule, which uses only function values, we prefer a somewhat more involved quadrature, calculating the approximate value of the integral by means of using also the values of the second derivative of the integrand. The gain is considerable even if not in order, but in the constant of the error formula. Second, as already suggested in the conclusion of [8], we apply Taylor series expansion at more points than just at the midpoint t 0 := k + 1/2 of the t-interval (k, k + 1). This reduces the size of powers of (t − t 0 ), from powers of 1/2 to powers of smaller radii. The Taylor polynomial of degree 7, considered in [8], had error size 2 −8 due to the contribution of |ξ − t 0 | 8 in the Lagrange remainder term, while here for k = 4 the division of the t-interval to (4, 4.5) and (4.5, 5) results in O(4 −n ) in the respective error contribution. Notations and a few general formula for the numerical analysis Let k ∈ N be fixed. (Actually we will work with k = 3 or k = 4 only.) 
To set the framework, here we briefly sketch the general scheme of our argument, and exhibit a number of general formulae for later use in the analysis. Let us introduce a few further notations. We will write t := p/2 ∈ [k, k + 1] and put So we are to prove that d(t) > 0 for k < t < k + 1. First we derive that at the endpoints d vanishes; and, for later use, we also compute some higher order integrals of G ± . Consequently we have We encounter a new phenomenon, compared to [8], when k = 3, since here G + (x) does not have a positive lower bound: we in fact have G + (1/3) = 0. (Let us note in passing that for G − we have min T G − ≈ 0.282... > 1/4 -but we do not use this in the following.) For higher x-derivatives of the composite functions G t + log j G + , needed in our analysis, vanishing of G + causes concerns for occurring negative powers of G + after differentiation, while the appearance of log j G + invoke concerns of blowing up calculations and estimates in view of "log 0 = ∞". The first problem we resolve by a comparison of G + to G ′2 + , always present in the numerator, while the second difficulty will be taken care of by using only continuous functions v a log b v, with a > 0, b ≥ 0, of v = G + (x). Although all this can be avoided, when G − is strictly bounded away from zero, for a possibly better estimation we still calculate the same comparative estimates even for G − . (Similarly, the idea of comparison of G ′2 ± and G ± could be used for higher k as well, whether or not the functions G ± vanish.) So we want to compare G ′ and √ G = |F |, more precisely G ′2 and G. Note that Another heuristical reasoning to justify the search for a bound of G ′2 /G, is that G ≥ 0, hence whenever G = 0 we necessarily have G ′ = 0, and the multiplicity m of any zero of G being an integer (as G is an entire function), we conclude m ≥ 2: so G ′2 has a zero of order 2(m − 1) ≥ m. So we start the search for a bound on G ′2 ± /G ± . To this end we write u = cos v with v = 2πx and calculate where U m (u) := sin((m + 1)v) sin v (v := arccos u) is the m-th Chebyshev polynomial of the second kind. We are to compare this and where here T m (u) = cos(mv) (v := arccos u) is the mth Chebyshev polynomial of the first kind. In all, G ′2 /G is always an entire function of x, and substituting u = cos v = cos 2πx we have the formula In the paper [8] we used Riemann sums and the standard Riemann sums approximation formula A new feature of the present approach is that for better approximation we now improve the numerical integration method by means of invoking a quadrature formula. This was not feasible for small t, as higher derivatives of the composite function H lead to G in the denominator: the m th derivative in general results in the occurrence of G t−m , and negative powers of G bear the risk of blowing up all of our estimates. This can be remedied a little by comparison of G ′2 to G, a lucky possibility explained above. This was already utilized in [8] to control 2 nd derivatives of H, and we'll make use of it here, too for k = 3, when for some integrals (some occurring H functions) t can be as small as t = 3, while we need to control 4 th derivatives of H in view of error terms of the quadrature formula we use. With this additional consideration the 4 th derivatives of H can always be controlled for k ≥ 3. (For k = 0, 1, 2, settled in [8], this could not have been possible.) For remaining self-contained, we deduce here the otherwise well-known quadrature formula what we want to apply. 
This starts with the 3 rd order Taylor polynomial approximation (with the so-called Lagrange error term), valid for four times continuously differentiable functions ϕ: Integrating over a symmetric interval [x 0 − q, x 0 + q] leads to Applying the same formula for N intervals of the form This leads to the following quadrature formula. 1 Lemma 5. Let ϕ be a four times continuously differentiable function on [0, 1/2]. Then we have Let us start analyzing the functions To find the maximum norm of H t,j,± , we in fact look for the maximum of an expression of the form v t | log v| j , where v = G(x) ranges from zero (or, if G = 0, from some positive lower bound) up to G ∞ ≤ 9. For that, a direct calculus provides the following. In particular for [a, b] = [0, 9] we always have For the application of the above quadrature (11) we calculate (c.f. also [8]) However, the error estimation in the above explained quadrature approach forces us to consider even fourth x-derivatives of H = H t,j,± . In order to calculate we start with computing Inserting (17) and (18) into (16) we arrive at the desired general formula for H IV t,j,± as follows At all occurrences we will need an estimate for H IV ∞ in order to apply it in the numerical quadrature formula. Therefore, we now start estimating the above expression. For a shorter notation we write v := G(x) ∈ [0, 9] and ℓ := |L| = | log v|. As a first step we thus find for j = 1, 2, 3, ..., t ≥ 3 the estimates Furthermore, to be used typically for smaller values of v = G(x), that is to say only for 0 ≤ v ≤ 3, we can derive a different estimation whenever some constant M * := M * (k) is known satisfying G ′2 /G ∞ ≤ M * . Namely, we then have Furthermore, estimating by means of Λ := max(ℓ, 1) and using ℓ j−1 , ℓ j−2 , ℓ j−3 , ℓ j−4 ≤ Λ j and also 2 √ v ≤ 1 + v we are led to 3. The proof of the k = 3 case of Conjecture 2 When k = 3, let us start with a few concrete numerical estimates of the functions G ± . For k = 3 we need U 3 (u) = 4u(2u 2 − 1), U 4 (u) = 16u 4 − 12u 2 + 1 and T 4 (u) = 8u 4 − 8u 2 + 1, T 5 (u) = 16u 5 − 20u 3 + 5u. Writing these in (9) yields for G + G ′2 canceling the common factors of (2u + 1) 2 . Note that the denominator is now non-vanishing in the interval [−1, 1], as its minimum is ≈ 0.12, attained at 1 + √ 13 6 ≈ 0.76759.... Thus the above rational function can be maximized numerically on the range u ∈ [−1, 1] of u = cos(2πx), the maximum being ≈ 3699, so . Although G − does not vanish, for a possibly better estimation for small values of G − (x), we still work out a bound on G ′2 − /G − . Again with u = cos v and v = 2πx we get from (9) max Note that now the denominator does not vanish and there is no singularity to make the numerical maximization difficult. Summing up, we find The next step is, as in [8], to see that d (j) (3) > 0 for the first few values of j = 1, 2. Proof. From (2) we clearly have Now we calculate the value -that is, these two integrals -numerically for t = k = 3 and j = 1. Both integrals should be computed within the error bound δ := 0.007. Invoking Lemma 5 we are left with the estimation of H IV 3,1,± ∞ . The general formula of (19) now specializes to We now estimate |H IV (x)| distinguishing two cases, the first being when v := G(x) ≥ 3. 
Inserting the estimates of G (m) ∞ from (7) for m = 0, 1, 2, 3, 4, we get from (28) For smaller values of v = G(x) we estimate (28) the same way as it is done in general in (21), with M m in (7) and M * in (26) (or, we substitute t = k = 3 and j = 1 in (21) and use the numerical values of M m and M * as said). This yields |H IV (x)| ≤ 1.2 · 10 9 v + (6.7v log v + 1.2v 3/2 + 1.4v 3/2 log v) · 10 8 + (2.8v 2 + 8.3v 2 log v) · 10 6 . Our next aim will be to show that d ′′ is concave in [3,4], i.e. that d IV < 0. That will be the content of Lemma 12. To arrive at it, our approach will be a computation of some approximating polynomial, which is, apart from a possible slight and well controlled error, a Taylor polynomial of d IV . Now we must set δ 0 , . . . , δ 10 , too. So let now δ j = 0.005 for each j = 0, . . . , 10. The goal is that the termwise error (37) would not exceed δ j , which will be guaranteed by N j step quadrature approximation of the two integrals defining d (j+4) (7/2) with prescribed error η j each. Therefore, we set η j := δ j 2 j j!/2, and note that in order to have (37) it suffices that 60 · 2 10 j!2 j−1 δ j according to Lemma 5. So at this point we estimate H IV 7/2,j+4,± ∞ for j = 0, . . . , 10 to find appropriate values of N ⋆ j . Lemma 11. For j = 0, . . . , 10 we have the numerical estimates of Table 1 for the values of H IV 7/2,j+4,± ∞ . Setting δ j = 0.005 for j = 0, . . . , 10 the approximate quadrature of order 500 =: N =: N j ≥ N ⋆ j with the listed values of N ⋆ j yield the approximate values d j as listed in Table 1, admitting the error estimates (37) for j = 0, . . . , 10. Furthermore, R 10 (d IV , t) ∞ < 0.011 =: δ 11 and thus with the approximate Taylor polynomial P 10 (t) defined in (38) the approximation |d IV (t) − P 10 (t)| < δ := 0.068 holds uniformly for t ∈ [3,4]. Proof. We start with the numerical upper estimation of H IV 7/2,j,± (x) for 3 ≤ x ≤ 4, where now in view of the shift of indices we need the estimation for 4 ≤ j ≤ 14. All what follows is not sensitive to j ≤ 14, but it is convenient that j ≥ 4, as otherwise in some derivatives the powers of L(x) = log G(x) would diminish, changing the formula slightly. Proof of the k = 3 case of Conjecture 2. Since d(3) = d(4) = 0, and d ′ (3) > 0, d takes some positive values close to 3; so in view of Lagrange's (Rolle's) theorem, d ′ takes some negative values as well. Therefore, d ′ decreases from a positive value at 3 to some negative value somewhere later; it follows that d ′′ takes some negative values in (3,4). Also, d ′′ is concave and d ′′ (3) > 0 implies that d ′′ changes from positive values towards negative ones; by concavity, there is a unique zero point τ of d ′′ in (3,4), where d ′′ has a definite sign change from positive to negative. It follows that d ′ , starting with the positive value at 3, first increases, achieves a maximal positive value at τ , and then it decreases, reaches zero and then eventually negative values, as seen above. That is, when it becomes zero at some point σ, it already has a negative derivative, and it keeps decreasing from that point on. So d ′ is positive until σ, when it has a strict sign change and becomes negative until 4. Therefore, d increases until σ and then decreases till 4; so d forms a cap shape and it is minimal at the endpoints 3 and 4, where it vanishes. It follows that d > 0 in (3,4). This concludes the proof of the k = 3 case of Conjecture 2. So we arrive at the analysis of d V . 
Numerical tabulation of values give that d V is decreasing from d V (4) ≈ −2, 217868... to even more negative values as t increases from 4 to 5. So we now set forth proving that d V < 0 in [4,5]. To arrive at it, our approach will be a computation of some approximating polynomial p(t), which is, within a small and well controlled error, will be a Taylor polynomial of d V (t). However, as we intend to keep the step number N of the numerical integration under 500, we take the liberty of approximating d V by different polynomials (using different Taylor expansions) on various subintervals of [4,5]. More precisely, we divide the interval [4,5] into 2 parts, and construct approximating Taylor polynomials around 4.25 and 4.75. Finally, we collect the resulting numerical estimates of H IV in Table 2 and list the corresponding values of N ⋆ j and d j , too, as given by the formulae (50) and the numerical quadrature formula (11) with step size h = 0.001, i.e. N = N j = 500 steps. Proof. We start with the numerical upper estimation of H IV 4.75,j,± (x) for 4.5 ≤ x ≤ 5. In (20) now we insert t = 4.75, use again the estimates (7) of M 1 − M 4 and ℓ < 3.7 and arrive at |H IV 4.75,j,± (x)| < 3.7 j 1.44 · 10 7 j 4 + 1.23 · 10 9 j 3 + 3.93 · 10 10 j 2 + 5.7 · 10 11 j + 3.18 · 10 12 . Finally, we collect the resulting numerical estimates of H IV in Table 3 and list the corresponding values of N ⋆ j and d j , too, as given by formulae (54) and the numerical quadrature formula (11) with step size h = 0.001, i.e. N = N j = 500 steps. Proof. We approximate d V (t) by the polynomial P 6 (t) constructed in (53) as the approximate value of the order 6 Taylor polynomial of d V around t 0 := 4.75. As the error is at most δ = 39.9, it suffices to show that p(t) := P 6 (t) + δ < 0 in [4.5, 5]. Now . We have already checked that p (j) (4.5) < 0 for j = 0 . . . 5, so in order to conclude p(t) < 0 for 4.5 ≤ t ≤ 5 it suffices to show p (6) (t) < 0 in the given interval. However, p (6) is constant, so p (6) (t) < 0 for all t ∈ R. It follows that also p(t) < 0 for all t ≥ 4.5. Conclusion With the help of the sharper quadrature formula (11) further numerical analysis is possible for higher values of k. In principle we can divide the interval (k, k + 1) to smaller and smaller intervals to get improved error estimations of Taylor expansions to compensate the larger and larger error bounds resulting from e.g. (7) and the increase of t. We have a strong feeling that this way we could work further to higher values of k. However, even that possibility does not mean that we would have a clear theoretical reason, a firm grasp of the underlying law, rooted in the nature of the question, for what the result should hold for all k.
2012-03-11T21:27:43.000Z
2012-03-11T00:00:00.000
{ "year": 2013, "sha1": "b01dfef4d54889e71bcbd34dd9007ca8369d8c05", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1203.2378", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "b01dfef4d54889e71bcbd34dd9007ca8369d8c05", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
259743806
pes2o/s2orc
v3-fos-license
Vision for the Post-Pandemic Education in BRICS This article briefly explores the pandemic’s impact on higher Education in BRICS member states. Attention is brought to the measures BRICS nations adopted to continue providing quality education despite the imposed restrictions and challenges. As explained in the text, transformations imposed by the pandemic affected the functioning of the entire education system and adapted responses depended on the available resources and overall capacity to adapt to the crisis by individual institutions and contexts. Furthermore, the transformations pointed out the existing inequalities and many unknowns for which various educational stakeholders were not always prepared. As evident from the account, strategic planning must better prepare education systems for emergencies such as the Covid-19 pandemic. It is also paramount that the narrative of the strategic planning and the vision for the future focuses on thriving instead of surviving. Introduction At the outset, BRICS cooperation was grounded in economic and political collaboration; however, with time, it evolved to embrace also cultural exchanges, Education and scientific initiatives, as well as technological advancements (Sun & Yang, 2021). Zooming in on higher Education, teaching and research collaboration among BRICS countries has been active since 2013. In this endeavour, in November 2013, the first BRICS Education Ministers Meeting was held at the 37 th UNESCO General Conference, hosted in Paris. The meeting initiated the process of multidimensional educational cooperation among the BRICS countries, paying special attention to higher Education. As Sun and Yang (2021) noted, research and capacity building in higher Education is currently promoted by two major multidimensional cooperation platforms, namely, BRICS Network University and the BRICS University League. The establishment and evolution of BRICS is a unique, large-scale collaboration between countries, driven by improved Education towards social, environmental and economic sustainability of society. Impact of the Pandemic on BRICS Higher Education Institutions COVID-19 impacted all aspects of life in countries across the world. In terms of Education, the impact was especially evident in teaching and research practices, which required emergency planning for teaching and researching under the imposed restrictions, provision of funding to support the required changes and often update of existing policies (Niemczyk et al., 2021). The World Health Organisation published a document titled Key Messages and Actions for COVID-19 Prevention and Control in Schools, which provided the basic principles to manage the influence of COVID-19 in Education effectively (Niemczyk et al., 2021). Educational institutions were advised to update their strategic plans, cancel all mass meetings, and implement a variety of safety measures for students and staff. Following the public health guidelines and protocols to contain COVID-19 (Negi & Azeez, 2022) as well as keeping in mind the objective to complete the academic year, distance education was identified as a way forward (Niemczyk et al., 2021;Petrova, 2021). Brazil adopted various measures to slow down the spread of the virus; however, the country was recovering from the 2015/2016 recession, therefore, their response to the pandemic was limited due to the frail state of the economy even prior to the pandemic. 
This vulnerable state of the country was reflected in Brazil's higher Education, where the pandemic exacerbated many already-existing challenges. This included a lack of financial resources and severe inequalities among education provisioning based on geographic location, social status and access to Education (Niemczyk et al., 2021). Similarly to Brazil and the rest of the world, educational system of Russia was forced to transition to a remote form of Education. As Petrova (2021) reported, the country's higher education institutions (HEIs) found themselves in an unfamiliar situation, which involved using and applying distance learning technologies. Russian higher education system counts 966 universities, excluding branch campuses and 4.7 million students (Minaeva & Taradina, 2022). By April 2020, more than 90% of Russian universities had the capacity to offer distance learning (Minaeva & Taradina, 2022). The transformations due to the pandemic affected the entire education system at all levels, creating uncertain times, which education systems were not prepared for (Petrova, 2021). In fact, the impact of the pandemic created inequalities visible in the uneven responses by different types of institutions based on their resources and overall capacity to adapt to the crisis. The Indian higher education system is the third largest in the world after China and the USA. Bordoloi et al. (2021) reported that in 2019, there were 993 universities, 39,931 colleges and 10,725 standalone institutions across India. After considerable debates about the right response towards the pandemic, it was decided that teaching and learning will entirely shift to distance education to ensure the successful completion of the academic year (Bordoloi et al., 2021). Before the pandemic, distance education made up 11% of the total enrolment in the segment of higher Education. Therefore, the transition to distance education was familiar to some students and educators (Bordoloi et al., 2021). However, the unanticipated crisis posed unique challenges to educators and students, who were expected to adapt to digital transformation overnight. Moving to the next BRICS member state, the People's Republic of China was the first country to close schools selectively in February 2020 (Niemczyk et al., 2021). As in other contexts, the pandemic has disrupted the traditional face-to-face learning and research methods at Chinese universities (Li & Che, 2022;Negi & Azeez, 2022). The sudden shift to remote teaching and research practices influenced the academic performance and the physical and psychological well-being of Chinese students and staff (Li & Che, 2022). Furthermore, research practices required innovative approaches due to the closure of laboratories, universities as well as travel restrictions (Abbas et al., 2022). As reported by Negi and Azeez (2022), the shift to distance learning resulted in the expansion of existing inequalities posing threats to the academic progress of many Chinese students. The impact of COVID-19 on South Africa's higher education system has to be considered against very particular circumstances since violent protests had disrupted teaching and learning at several of the nation's twenty-six public universities before the outbreak of the pandemic (du Plessis, 2022; Landa et al., 2021;Motala & Menon, 2020). 
Students resorted to protests to express their dissatisfaction with university management, accommodations, and tuition debts, which led to the temporary suspension of classes (du Plessis, 2022; Landa et al., 2021). Therefore, the strikes already affected teaching, learning and research at some South African universities. In terms of the pandemic, HEIs were only permitted to commence reopening for the purpose of teaching and learning on 1 June 2020, following the announcement that the country would move from Alert level 5 (high restrictions) to Alert Level 3. Schalkwyk (2021) reported that under Alert Level 3, HEIs were permitted to allow 33% of students who required clinical training in all years of study to return on campus. In addition, postgraduate students who needed access to laboratories and technical equipment to undertake their research studies were also allowed to return. Moving Forward The scholarly literature revealed that the pandemic posed many challenges to higher Education in BRICS countries. As an active response, all member states transitioned to online Education, updated their policies, provided funds to address emerging challenges, developed digital programmes and transformed their teaching and research strategies according to their capacities (Niemczyk et al., 2021). As evident in the previous section, Russia and China had well-established digital infrastructure and trained educators prior to the pandemic, which they employed to continue the academic year (Petrova, 2021). Meanwhile, Brazil, India and South Africa were experiencing challenges such as a lack of training and digital infrastructure, budget cuts (Rosa et al., 2021) and protests before the emergence of the pandemic (Bordoloi et al., 2021). Therefore, although the e-provision of Education allowed institutions to move forward, it was not accessible to all students (Rosa et al., 2021). In fact, all of the BRICS countries informed that transitioning to online Education created a digital divide and exacerbated inequality in Education (Bordoloi et al., 2021;Dawood & Van Wyk, 2021;Jiang et al., 2021;Minaeva & Taradina, 2022;Naidoo, 2022;Niemczyk et al., 2021). Furthermore, all BRICS countries reported that the pandemic had impacted their students' and staff's mental health and well-being (Hedding et al., 2020;Li & Che, 2022;). Against this backdrop of lessons learnt from BRICS, recommendations towards a way forward, in post-pandemic higher Education can be made. For instance, it seems evident that the vision forward must foresee education systems to be better prepared for emergency educational situations such as the Covid-19 pandemic. The strategic plans in cases of emergency education should include transitioning to online Education, supervising and supporting students and addressing educational inequalities (Niemczyk et al., 2021). Furthermore, significant consideration must be devoted to training educators to develop the necessary knowledge and skills to effectively engage in quality online teaching and research (Niemczyk et al., 2021). Proficiency with digital technologies and maximised access to the internet are instrumental in solving the challenges posed by the pandemic and any potential future emergencies. It is clear that staff and students will be required to upgrade their teaching and learning approaches since digital competence will remain a critical skill expected from graduates. 
Based on recent scholarly reports, online Education constitutes the best way forward during emergency education and thus should be viewed as a COVID-keeper. However, the evident inequalities in terms of access to online Education must be acknowledged and adequately addressed. In addition, the vision forward for post-pandemic era should consider synchronising online Education with contact teaching. The need to provide for all students, irrespective of economic status, gender or location, with quality education should continue to be taken into consideration by governments and HEIs (Niemczyk et al., 2021). It is likely that higher Education in BRICS countries with the capacity and resources to exploit the benefits of online teaching and learning will do so, while the remainder will resort to traditional teaching methods (Schalkwyk, 2021). COVID-19 definitely served as a gateway, opening up multiple channels for teaching and learning in the form of online and distance methods (Schalkwyk, 2021). Looking forward, governments as well as private funding agencies should continue to assist educators and researchers to advance their efforts in undertaking their online teaching and research activities. Making informed decisions and actions is highly dependent on bringing diverse perspectives that expand the potential of effective solutions. In addition, research about creative education strategies to approach future states of emergency should be promoted. The pandemic will not last forever; however, we must learn from it to be prepared for future emergencies (Niemczyk et al., 2021). Furthermore, for better preparation in periods of emergency education, governments and HEIs in BRICS countries should set aside special funds to ensure the completion of the academic year regardless of the circumstances (Adom et al., 2020). Finally, higher Education needs to seek global cooperation and partnerships, which are prerequisites to reimagining higher Education and research in the post-pandemic era. (du Plessis, 2022). The Special Edition This special issue titled New day starts in the dark: Vision for the post-pandemic BRICS education provided a scholarly space to address contemporary educational issues and stimulate much-needed conversation about a vision for post-pandemic BRICS Education. The edition also offered space for dialogue and intellectual exchange on this imperative and timely topic. 2020 marked a global crisis impacting all social sectors, including higher education research, teaching practice, management, and most institutional operations. The pandemic exposed HEIs to many challenges pointing out vulnerabilities, highlighting social inequalities and a growing gap between the economically privileged and those who struggle to access and benefit from quality education. Beyond a doubt, Covid-related circumstances questioned the very nature of higher Education and its role within the global society. It is worth recognising that the stormy time of the pandemic also presented several opportunities to re-evaluate past practices, revisit strategic education plans, and re-envision a more inclusive future for HEIs. On the one hand, HEIs showed their capacity to change and rapidly adapt to remote teaching and research. On the other hand, many HEIs were not prepared to respond to the restrictions posed by the pandemic and struggled significantly for a long time. 
Although many strategies were put in place, inequalities in access to digital technologies and online learning were and are immense, which further influences social disparities in BRICS and beyond. This special edition is dedicated to critically exploring the path from the crisis to clarity and a better future of research and teaching in HEIs. It is paramount that we frame the narrative for the future with a focus on thriving instead of surviving. Embracing a positive approach, we can be reminded of Friedrich Nietzsche's statement that chaos gives birth to dancing stars. To that end, we do not wait for the crisis to fully pass and passively follow steps prescribed by others to go back to so called 'normal.' Instead, the dancing stars are found through proactivity and courage to rebuild the plane while flying and implement upgrades. Although the response of HEIs to the crisis is commendable, the need for transformations is visible, and it will only accelerate. The time is ripe to make sense of the current landscape and ongoing shifts with the intention to zoom our attention on the next steps to build back better. As evident from the collection of articles, this issue provided space for scholars from different disciplines to share how they envision the move forward based on the lessons learned and reflections made about new directions for research, teaching, and general academic practice. The authors covered several important aspects, including new models and strategies in research and teaching; comparative and international Education as a guiding discipline; accessibility, quality and equality in Education; research ethics; classroom management skills; teacher education; diversity and inclusion; virtual connectedness and collaborations; and digital literacy. As we know, social problems are territorially blind, meaning that no country has sufficient knowledge and capacity to solve challenges independently. The articles from various disciplines and contexts allow opportunities for a unified effort and collective thinking to explore the proposed theme and propose post-pandemic plans to prosper. About the Guest Editor Professor Ewelina K Niemczyk is a scholar in Comparative and International Education at North-West University, South Africa. Her research interests focus on higher education with specific attention to research capacity building, education for sustainable development and BRICS education. As a comparativist, Professor Ewelina has experiences in a variety of teaching and research positions in Canadian, South African and Polish contexts. Her scholarly interests are reinforced through the modules she teaches, the supervision of postgraduate students as well as her publications. As per her contribution to the wider scholarly community, she actively serves as a reviewer, editor, conference chair, and keynote speaker.
2023-07-12T07:00:38.501Z
2023-06-28T00:00:00.000
{ "year": 2023, "sha1": "c43b6b49500277099ff9989e540760d3a2574218", "oa_license": "CCBY", "oa_url": "https://spaceandculture.in/index.php/spaceandculture/article/download/1362/536", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "2afc0ac7ba998fa6b80d0c28fd5cf09d29a75e8a", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [] }
232300708
pes2o/s2orc
v3-fos-license
Chromosomal abnormalities predisposing to infertility, testing, and management: a narrative review Background: Much interest has not been placed on the role of chromosomal abnormalities in the pathogenesis and rising prevalence of infertility in recent times. This review was conducted to renew public interest on the chromosomal basis of infertility, testing, and management. Main text: Meiotic and post-zygotic mitotic errors may cause infertility-predisposing chromosomal abnormalities, including Klinefelter syndrome, Jacob syndrome, Triple X syndrome, Turner syndrome, and Down syndrome. Chromosomal abnormalities such as deletion, translocation, duplication, inversion, and ring chromosome may also predispose to infertility. Notable features of male chromosomal infertility include spermatogenic failure, characterized by azoospermia, oligospermia, and gonadal dysgenesis, while females include premature ovarian insufficiency, amenorrhea, spontaneous abortion, and gonadal dysgenesis. The risk of these abnormalities is influenced by maternal age and environmental factors such as chemical exposure, smoking, and alcohol consumption. Most chromosomal abnormalities occur spontaneously and are not treatable. However, early prenatal screening and diagnostic tests can lessen the effects of the conditions. There is also a growing belief that certain diets and drugs capable of changing gene expressions can be formulated to neutralize the effects of chromosomal abnormalities. Conclusion: Meiotic and mitotic errors during gametogenesis and fetal development, respectively, can cause chromosomal abnormalities, which predispose to infertility. Couples who are at increased risk, particularly those with a family history of infertility and women at an advanced age (≥ 35 years), should seek medical advice before getting pregnant. Background Infertility is the failure to conceive when a couple engages in regular unprotected copulation for at least a year (Yahaya et al. 2020). At the minimum, 15% of couples worldwide experience infertility, of which males account for 20-30%, females (20-35%), and both shared the remaining (SingleCare 2020; Yahaya et al. 2020). Infertility is more prevalent in low-income nations, especially in West Africa and Southeast Asia (Elhussein et al. 2019). Infertility can be primary (also called sterility), which describes couples who have never conceived despite one year of consistent copulation (Mvuyekure et al. 2020). It can also be secondary, which refers to couples who have had at least one successful conception in the past (Mvuyekure et al. 2020). Infertility affects all aspects of life. It causes psychosocial problems such as frustration, depression, anxiety, hopelessness, and guilt (Hasanpoor-Azghdy et al. 2014). Yahaya et al. Bull Natl Res Cent (2021) 45:65 In some countries in Africa and Asia, childless people suffer discrimination, mockery, and divorce or separation (Yahaya et al. 2020). Infertile women face deprivation of financial support and basic needs such as clothes and foods by their husband (Dyer and Patel 2012). In some cases, infertility leads to polygamy or infidelity from both sides. The treatments of infertility may incur huge financial costs, resulting in economic problems, particularly in developing nations where treatment costs are often paid by the patients (Dyer and Patel 2012). 
Additionally, infertility may reduce the urge for success, resulting in reduced work efficiency and job loss, culminating in reduced income to cater to the family (Nahar and Richters 2011). The pathologies of infertility include endocrine dysfunction, inflammatory diseases, genital tract abnormalities, gametogenesis failures, implantation failures, and erectile or ejaculatory problems (Okutman et al. 2018). These pathologies can be triggered by lifestyles, environmental, or genetic factors. A thorough understanding of these factors may help reduce the prevalence and burden of infertility. Particularly, more understanding of the genetic factor is important because 15-30% of male infertility alone have a genetic origin (Yahaya et al. 2020). The genetic factor can be chromosomal (numerical or structural anomalies) or single-gene anomalies (Okutman et al. 2018). Chromosomal factor alone accounts for 2-14% of male infertility (Harton and Tempest 2012) and as much as 10% of female infertility (Vicdan et al. 2004). This shows that a thorough understanding of chromosomal abnormalities is imperative to reduce the burden of infertility. Although studies show that a lot of works have been done on the chromosomal basis of infertility, much attention has not been devoted to the topic in the recent past. This review was conducted to renew public interest on the chromosomal basis of infertility, testing, and management. Database searching and search strategy Notable academic repositories including Scopus, Google Scholar, and PubMed were searched individually for literature on the subject. Keywords used for searching include 'infertility, ' 'prevalence of infertility, ' 'chromosomal abnormalities, ' 'numerical chromosomal abnormalities, ' 'structural chromosomal abnormalities, ' and testing and management of chromosomal abnormalities. ' The articles collected from various repositories were merged and sorted to remove double citations. Inclusion and exclusion criteria Articles selected are those that were written in the English language and majored in chromosomal abnormality, chromosomal basis of infertility, and testing and management of chromosomal abnormalities. No restriction was placed on the year of publication of articles. However, on articles that treated the same topic with contrasting views, the most unanimous and recent information was prioritized. One hundred and ten (110) articles were collected from the databases searched, but were reduced to 92 after duplicates were removed. Of the 92 articles, 85 passed the eligibility test, of which 77 fit the objectives of the current study and were thus included. Chromosome overview Chromosomes are string-like structures in human cells (Fig. 1). Human cells usually have 23 pairs of chromosomes (46 in all) and contains between 20,000 and 25,000 genes (Genetic Alliance 2009; NHGRI 2020). One set of 23 chromosomes is maternal in origin, while the other is paternal (Genetic Alliance 2009; NHGRI 2020). Chromosomes number 1 to 22 are known as the autosomes, while the 23rd pair is called the sex chromosomes (denoted X and Y chromosome) (Genetic Alliance 2009; NHGRI 2020). Sex chromosomes determine humans' sex in which females possess two X chromosomes (XX), and males possess a X and a Y chromosome (XY) in each cell (Genetic Alliance 2009;NHGRI 2020). The genes on the chromosomes contain the information the body needs to function (Genetic Alliance 2009). 
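The karyotype shorthand used throughout the abnormality sections that follow (for example 47,XXY or 45,X) simply records the total chromosome count followed by the sex-chromosome complement. A minimal sketch of that bookkeeping, assuming nothing beyond the counts described above (the function name is ours):

```python
def karyotype(autosome_pairs=22, sex_chromosomes="XY"):
    """Return the 'total,sex-chromosome' shorthand for a human karyotype."""
    total = 2 * autosome_pairs + len(sex_chromosomes)
    return f"{total},{sex_chromosomes}"

print(karyotype())                        # 46,XY  (typical male)
print(karyotype(sex_chromosomes="XX"))    # 46,XX  (typical female)
print(karyotype(sex_chromosomes="XXY"))   # 47,XXY (an extra X, discussed below)
print(karyotype(sex_chromosomes="X"))     # 45,X   (a missing sex chromosome)
```

Autosomal gains such as trisomy 21 are written differently (e.g., 47,XX,+21), as described later in the text.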
Chromosomal abnormalities

Chromosomal abnormalities often result from meiotic and mitotic errors (NHGRI 2020). Mitosis takes place in somatic cells and results in two daughter cells, each having 46 chromosomes like the parent cell (NHGRI 2020). Meiosis occurs in the reproductive cells (eggs and sperms) and produces four daughter cells, each having half of the chromosome number of the parent cell (NHGRI 2020). However, meiotic and mitotic errors can produce cells with abnormal copies of a chromosome (NHGRI 2020). Most often, chromosomal abnormalities occur spontaneously during meiosis, leading to abnormalities that are found in all cells of the body. However, some abnormalities may occur in somatic cells after fertilization, leading to mosaicism in which some cells express the abnormalities while other cells remain normal (NHGRI 2020). Maternal age increases the risk of chromosome aberrations (NHGRI 2020). Women inherit all the eggs they will ever produce from their mothers, and so the eggs are prone to aging-induced genetic alterations (NHGRI 2020). Thus, women at an advanced age have a higher chance of producing babies expressing chromosomal abnormalities than young women (NHGRI 2020). On the other hand, men produce new sperm daily, so paternal age is less likely to raise the risk of chromosome abnormalities (NHGRI 2020). Maternal and paternal environmental exposures and lifestyles may also influence the pathogenesis of chromosomal abnormalities (NHGRI 2020). There are several types of chromosomal abnormalities, which are grouped into numerical and structural chromosomal abnormalities (Genetic Alliance 2009; NHGRI 2020).

Numerical abnormalities

Numerical abnormalities (also known as aneuploidies) are the most common chromosome abnormalities (Gersen and Keagel 2005). Chromosomal aneuploidies are described as alterations in chromosome numbers of diploid or haploid cells (Harton and Tempest 2012). Aneuploidy is the presence of an unusual number of chromosomes in a cell due to an additional (termed trisomy) or lost (termed monosomy) chromosome (Genetic Alliance 2009; NHGRI 2020). Trisomy is more common than monosomy among individuals suffering from aneuploidy (Genetic Alliance 2009). Chromosomal aneuploidy is the most prevalent cause of spontaneous abortion and developmental errors in humans (Harton and Tempest 2012). Aneuploidy is predominantly maternal in origin. However, sperm aneuploidies are more common among infertile men than fertile men (Harton and Tempest 2012). There are many chromosomal abnormalities; however, the most frequent are Klinefelter syndrome, Jacob syndrome, Triple X syndrome, 45,X0/46,XY mosaicism, Turner syndrome, and Down syndrome.

Klinefelter syndrome (47,XXY)

Klinefelter syndrome (KS) is a chromosomal abnormality that affects only males, in whom the cells carry two copies of the X chromosome (Fig. 2). KS is the commonest gonosomal (sex chromosome) anomaly among men, occurring in 0.1-0.2% of newborns, and as high as 67% and 19% among azoospermic and oligospermic patients, respectively (Huynh et al. 2002; Mau-Holzmann 2005). KS is not inherited and is often caused by meiotic nondisjunction or post-zygotic nondisjunction (Bonomi et al. 2017; Los and Ford 2020). Thus, KS exists in several forms, the most common of which is the acquisition of an additional copy of the X chromosome in the cells of the affected (47,XXY), occurring in over 90% of cases (Bonomi et al. 2017).
An additional copy of the X chromosome may also exist in some cells only and is called mosaic Klinefelter syndrome (46,XY/47,XXY), characterized by fewer symptoms (Bonomi et al. 2017). In rare cases, more than two copies of the X chromosome (e.g., 48,XXXY and 49,XXXXY) may be found in each cell, resulting in severe conditions (Bonomi et al. 2017).

(Fig. 1: a karyogram of the human chromosomes.)

Abnormal copies of genes on the X chromosome can disrupt male sexual development, resulting in genital abnormalities and spermatogenic failure, culminating in infertility (Los and Ford 2020). The testes of individuals expressing KS contain stem cells but degenerate too quickly (Wikström et al. 2007), so much so that few or no cells are left for spermatogenesis at puberty (Wikström et al. 2007). The Leydig cells of KS patients are hyperplastic and thus produce insufficient testosterone, resulting in poor libido, erectile dysfunction, and azoospermia (Nieschlag 2013; Zitzmann et al. 2004). At a minimum, 60% of pregnancies with KS result in miscarriage (Bonomi et al. 2017). KS is often accompanied by other features such as speech and learning disabilities, weak bones, enlarged breasts, epilepsy, and type 2 diabetes (Nieschlag 2013). Overall, the severity of these phenotypes correlates with the number of X chromosomes in the cells (Bonomi et al. 2017).

Jacob syndrome (47,XYY)

Jacob syndrome (JS) affects males only (Fig. 3). It is the second most common gonosomal abnormality after KS (Chantot-Bastaraud et al. 2008), occurring in about 1 in 1000 male newborns (Kim et al. 2013; Liu et al. 2020). Most cases of JS are not inherited. It is caused mainly by parental nondisjunction at meiosis II (before conception), leading to an additional Y chromosome (47,XYY) in all cells of the affected offspring (Kim et al. 2013; Latrech et al. 2015). Thus, males with JS have 47 chromosomes. Rarely, nondisjunction may occur from post-zygotic (after conception) mitotic errors, resulting in a mosaic karyotype (46,XY/47,XYY) in which some cells are not affected (Kim et al. 2013; Latrech et al. 2015). Some common features of JS include infertility in adulthood, behavioral and cognitive disorders, facial dysmorphia, micropenis, curved penis with non-palpable testes, and decreased total testosterone (Latrech et al. 2015; MedlinePlus 2020a). However, some men expressing JS are fertile (Kim et al. 2013). In these men, the additional Y chromosome is lost before meiosis, thus preventing infertility (Kim et al. 2013).

Triple X syndrome (47,XXX)

Triple X syndrome (47,XXX), otherwise called trisomy X syndrome, is a sex chromosome aneuploidy in which a female has one additional X chromosome (Fig. 4). It is the commonest female chromosomal abnormality, affecting about 1 in every 1,000 female newborns (Tartaglia et al. 2010; Rafique et al. 2019). Trisomy X syndrome is usually not inherited and results mainly from maternal nondisjunction during meiosis (Rafique et al. 2019). However, post-zygotic nondisjunction is found in almost 20% of cases (Tartaglia et al. 2010). This results in an additional X chromosome in only some cells of the affected, a phenomenon called 46,XX/47,XXX mosaicism (MedlinePlus 2020a, b, c). Women expressing Triple X are often fertile and produce babies with a normal chromosomal number, indicating that the additional X chromosome is usually not transmitted. Reported features of trisomy X include tall stature, congenital urogenital anomalies, epilepsy, speech delays, cognitive and attention deficits, and mood disorders (Tartaglia et al.
2010;MedlinePlus 2020a, b, c). 45,X0/46,XY mosaicism 45,X/46,XY mosaicism, otherwise called X0/XY mosaicism and mixed gonadal dysgenesis, is a rare sex chromosome aneuploidy with a prevalence of approximately 1 in 15,000 newborns (Johansen et al. 2012). In 45,X/46,XY mosaicism, two cell lines exist, of which one has 45,X karyotype (X monosomy) and the other has a normal male karyotype (46,XY). The two cell lines are differently distributed in individuals suffering from the condition which could be responsible for the varied phenotypes expressed by the affected individuals (Rosa et al. 2014). 45,X/46,XY mosaicism is most often caused by the loss of the Y chromosome through nondisjunction in some somatic cells after normal fertilization (Telvi et al. 1999;Rosa et al. 2014). Both the 46,XY and 45,X cell lines divide nonstop, resulting in a baby with 45,X/46,XY (Johansen et al. 2012). The 45,X/46,XY karyotype can also be formed by the malformation, deletions, or translocations of Y chromosome segments (Johansen et al. 2012). This abnormality can repress the SRY genes, resulting in abnormal genitals (incomplete sexual differentiation) and testosterone levels (Johansen et al. 2012). It can also cause conditions such as azoospermia, oligospermia, sperm DNA fragmentation, and increased gonadotropins (Rosa et al. 2014;Ketheeswaran et al. 2019). In some cases, the affected show clinical signs of Turner syndrome (Efthymiadou et al. 2012). Overall, the commonest feature of 45,X/46,XY syndrome is sexual ambiguity, responsible for about 60% of cases, while the least is bilaterally descended testes, occurring in 11-12% of cases (Efthymiadou et al. 2012). However, some individuals expressing 45,X/46,XY mosaicism show normal male sexual development (Efthymiadou et al. 2012). Down syndrome Down syndrome (DS) is among the best known chromosomal disorders in humans (MacLennan 2020; NHGRI 2020). It is the commonest genetic disease, occurring in almost 1 in 400-1500 newborns (Kazemi et al. 2016;MacLennan 2020). DS, often referred to as trisomy 21, occurs by nondisjunction of chromosome 21 (in either the sperm or egg), resulting in cells with three copies of chromosome 21 (CDC 2020). Thus, the karyotype for female trisomy 21 is 47, XX, + 21, while the male is 47, XY, + 21 (Fig. 6). DS may also occur when an additional section or a full chromosome 21 is present, but bound or translocated to a different chromosome (usually chromosome 14 or 15), rather than being a separate chromosome 21 (Kazemi et al. 2016;CDC 2020). This translocation could be Robertsonian, isochromosomal, or ring chromosome (Asim et al. 2015). Because these translocations can be transmitted, this form of DS is sometimes called familial DS (Kazemi et al. 2016). The third form of DS is mosaicism, which is due to errors in cell division after fertilization (Asim et al. 2015;CDC 2020 (Shin et al. 2010). It is difficult to differentiate each form of DS without looking at the karyotypes because they all have similar physical features and behaviors (CDC 2020). However, mosaic DS may be less severe because some cells have a normal chromosome number (CDC 2020). Pregnancies at advanced maternal age (≥ 35 years) increase the risk of producing a baby with DS (Sherman et al. 2007). However, most babies showing DS are born by women less than 35 years old because younger women give more births (CDC 2020). Trisomic fetuses are at increased risk of miscarriages, defective spermatogenesis in men, and premature menopause in women (Pradhan et al. 2005;Asim et al. 
2015;Parizot et al. 2019). Individuals with DS usually show moderately low levels of intelligence and speech disorders (Asim et al. 2015;CDC 2020). Structural abnormalities Structural abnormalities occur when a section of a chromosome is deleted, had an additional segment, joined another chromosome, or inverted (Genetic Alliance 2009). It results from splintering and rearrangements of chromosomal segments (Genetic Alliance 2009). The rearrangements are described as balanced if the chromosome is intact, and unbalanced if a piece of information is added or missing (Genetic Alliance 2009). Since all genetic information is retained in balanced chromosomal rearrangements, it is less likely to produce any effect (Genetic Alliance 2009; MedlinePlus 2020d). However, a disease can arise from a balanced rearrangement if the chromosomal break occurs in a gene and cause its malfunctions (Genetic Alliance 2009). A disease may also occur if chromosomal segments bind and produce a hybrid of two genes, resulting in a de novo protein that functionally harms the cell (Genetic Alliance 2009). These showed that individuals expressing balanced rearrangements have a high risk of producing unbalanced gametes, resulting in spontaneous abortion, infertility, or abnormal babies (Aubrey and Jeff 2015). Common chromosome structural abnormalities include translocations, deletions, duplications, inversions, and ring chromosomes (Genetic Alliance 2009). Chromosome translocations Chromosome translocation is a phenomenon that occurs when a segment of chromosome breaks and binds to another chromosome, resulting in an unusual rearrangement of chromosomes (EuroGentest 2007; MedlinePlus 2020d). Translocation is the most common chromosomal rearrangement (Reproductive Science Center 2020). Translocation may occur during gametogenesis due to meiotic errors, resulting in abnormalities that feature in all the cells of the baby (Aubrey and Jeff 2015). It may also result from post-zygotic mitotic errors, resulting in two cell lines in which some cells are normal and some are affected (Aubrey and Jeff 2015). Two main types of translocation exist and are reciprocal and Robertsonian translocation (Fig. 7). Reciprocal translocation is a chromosome abnormality in which two different chromosomes (non-homologous chromosomes) exchanged segments (EuroGentest 2007; Aubrey and Jeff 2015). Robertsonian translocation, also known as centric fusion, occurs when the long arm of a chromosome breaks and attached to the centromere of a non-homologous chromosome (Asim et al. 2015 or oligospermia (Dong et al. 2012). It may also cause recurrent miscarriage (Stern et al. 1999). Translocations may also occur between sex chromosomes and autosomes and have been implicated in some cases of infertility (Grzesiuk et al. 2016). X-autosome translocations impair pairing during meiotic recombination, disrupting gametogenesis, and resulting in spermatogenic failure (Grzesiuk et al. 2016). The pairing problem creates unrepaired double-strand DNA breaks, which can result in aneuploid gametes (Grzesiuk et al. 2016). In women, sex-autosome (X-autosome) translocations are rare, occurring in about 1in 30,000 newborns with variable phenotypes (Shetty et al. 2014). However, some clinical studies showed that it can cause the absence of mensuration, insufficient sex hormones, multiple congenital anomalies, and intellectual disability (Shetty et al. 2014). 
Chromosomal inversions

Chromosome inversions are structural intra-chromosomal rearrangements, which occur when two breakpoints exist in a chromosome and the segment between the breakpoints rotates 180° before reattaching with the two broken ends (Griffiths et al. 2000; Chantot-Bastaraud et al. 2008). Inversions are the most prevalent chromosomal rearrangements after translocations (Chantot-Bastaraud et al. 2008). Two types of inversion exist, which are paracentric and pericentric (Chantot-Bastaraud et al. 2008). Paracentric inversions do not include the centromere and both breaks occur in one arm of the chromosome, while pericentric inversions include the centromere and there is a breakpoint in each arm of the chromosome (Fig. 8). Unlike deletions and duplications, genetic information is not lost or gained in inversion; it only reshuffles the genes (Griffiths et al. 2000). In addition, despite that the genes on the inverted chromosome are rearranged backward, the body is still able to read them (NHS 2020). As such, inversions often do not induce any abnormality in the affected so long the rearrangement is balanced (Griffiths et al. 2000; Chantot-Bastaraud et al. 2008). However, there is a high prevalence of abnormal chromatids in people who are heterozygous for an inversion (Chantot-Bastaraud et al. 2008). This occurs when crossing-over takes place within the inverted segment and caused unbalanced gametes, resulting in infertility (Chantot-Bastaraud et al. 2008). Furthermore, in some cases, one of the chromosome breaks may occur within a gene that performs important functions, disrupting its functions (Griffiths et al. 2000). During meiosis, inversions may force chromosomes to create inversion loops to enable homologous chromosomes to pair (Harton and Tempest 2012). The mechanisms involved and time taken to form these loops can cause infertility (Harton and Tempest 2012). Recombination is reduced in these loops, causing meiotic arrest, and resulting in cell death and low sperm count (Harton and Tempest 2012). Even if recombination takes place normally within the inversion loop, it will produce unbalanced gametes (Harton and Tempest 2012). Both paracentric and pericentric inversions also increase the risk of miscarriage due to missing or extra chromosome materials in the sperm or eggs (NHS 2020).

Chromosome duplications

Chromosomal duplications occur when a region of a chromosome is duplicated (Clancy and Shaw 2008). Thus, duplications result in extra genetic materials (NHGRI 2020). Duplication is termed tandem if the duplicated segment is next to the original, but non-tandem or displaced if non-duplicated regions are in-between (Fig. 9). There is also reverse duplication. Duplications affect gene
A deletion that involves the centromere will cause an acentric chromosome that will presumably be eliminated from the cell (Clancy and Shaw 2008). Also, the length of the deletion determines the number of genes affected, and thus the severity of the effects (Clancy and Shaw 2008). Deletions affect gene dosage and thus the phenotype (Clancy and Shaw 2008). Some genes require two copies to produce a normal expression, so if one allele is deleted (called haploinsufficiency), a mutant phenotype will result (Clancy and Shaw 2008). Chromosomal deletions affecting sex chromosomes will most likely disrupt reproductive development. Y chromosome deletion, in particular, has been implicated in male infertility, often tagged Y-chromosome infertility (MedlinePlus 2019). This condition is usually not inherited as most cases are observed in men with no family history of the disorder (MedlinePlus 2019). Y chromosome deletions cause infertility by deleting Y-linked genes in the AZF regions which are necessary for normal spermatogenesis (Heard and Turner 2011). Loss of Y-linked genes may prevent the synthesis of some proteins needed for normal sperm cell development (MedlinePlus 2019). Y chromosome deletion causes spermatogenic failure, leading to infertility (MedlinePlus 2019). The affected may show azoospermia, oligospermia, teratospermia, or sperms with abnormal motility (MedlinePlus 2019). Some men expressing mild to moderate oligospermia may sometime produce a child naturally (MedlinePlus 2019). Furthermore, the majority of men with Y chromosome infertility have some sperm cells in the testes that can be obtained to assist oligospermic to father a child (MedlinePlus 2019). However, when men with Y-chromosome infertility produce children, whether naturally or assisted, they will transmit the abnormality on the Y chromosome to all their male children (MedlinePlus 2019). Consequently, males will also express Y-chromosome infertility (MedlinePlus 2019). This form of inheritance is Y-linked and so females are not affected because they do not inherit the Y chromosome (MedlinePlus 2019). X chromosomal deletion can affect both male and female fertility. The X chromosome has many genes that are embedded in the testis and ovaries and involve in gametogenesis (Zhou et al. 2013). Male infertility is often caused by spermatogenesis disruption in which X chromosome dosage is implicated (Vockel et al. 2019). Males normally have one X chromosome whose most genes are not on the Y chromosome, so any mutational loss of function of genes on the X chromosome cannot be compensated (Vockel et al. 2019). Deletion in X chromosomes may cause defective chromosomal synapsis, meiotic arrest, and infertility (Zhou et al. 2013). In females, X chromosomal deletion may cause premature ovarian failure, gonadal dysgenesis, and infertility (Ferreira et al. 2010). Ring chromosomes A ring chromosome is a chromosome abnormality whose ends fused and formed a ring (Shchelochkov et al. 2008). All human chromosomes can form ring chromosomes (Guilherme et al. 2011). To form a ring, the two ends of the chromosome break and the broken ends fused (Fig. 11). Rarely, the telomeres of the chromosome fuse without losing any genetic information and thus produce no phenotypic effects (Shchelochkov et al. 2008). Ring chromosomes often occur spontaneously and rarely inherited due to instability of ring chromosomes during cell division and so may be lost (Yip 2015). 
However, if transmitted, ring chromosomes may form new rings in the offspring, which coexist with the normal cell line (Rajesh et al. 2011). This causes a mosaic karyotype in the maternal and fetal cells (Rajesh et al. 2011). Mosaicism is prevalent and influences the severity of the condition (Guilherme et al. 2011). Furthermore, during mitosis, ring chromosomes duplicate and assort regularly to the daughter cells, transmitting the rings (Rajesh et al. 2011). Ring chromosome carriers can be infertile due to changes or loss of genetic materials following ring formation (Yip 2015). In males, autosomal ring chromosomes often cause oligospermia and azoospermia, probably due to gamete instability at meiosis (Rajesh et al. 2011). Ring chromosomes rarely impair female fertility (Lazer and Friedler 2019). However, low ovarian reserve has been reported in women expressing ring chromosomes (Lazer and Friedler 2019).

Testing for chromosomal abnormalities

According to Winchester Hospital (2021), most chromosomal abnormalities cannot be cured. However, prenatal screening and diagnostic tests can help lessen the effects of the conditions on both the mother and baby. They can also offer the choice to terminate the pregnancy if an abnormality is detected. Screening and diagnostic tests are often done to determine the presence and risk of chromosomal abnormalities (Mater Centre for Maternal Fetal Medicine 2017). A screening test searches for signs that may indicate an embryo is at increased risk for a chromosome abnormality; it does not determine whether a baby has a certain abnormality or not (Mater Centre for Maternal Fetal Medicine 2017). On the other hand, a diagnostic test confirms the presence or otherwise of certain chromosomal abnormalities.

Screening tests

There are three types of screening tests: the first trimester combined screen (FTCS), the triple test or second trimester maternal serum screen, and noninvasive prenatal testing (NIPT) (Mater Centre for Maternal Fetal Medicine 2017). The FTCS involves a combination of an ultrasound scan of the fetus at 11-13 weeks gestation and a blood test of the mother at 10-13 weeks gestation (Mater Centre for Maternal Fetal Medicine 2017; Spencer et al. 2003). The test measures the concentrations of two naturally occurring hormones in the blood, pregnancy-associated placental protein A and beta-human chorionic gonadotropin (Raising Children Network 2021). In addition to the maternal blood test and baby ultrasound, the test combines the maternal age (the age of the egg if using a donor egg), weight, ethnicity, and smoking status to score the risk for chromosomal abnormalities (Mater Centre for Maternal Fetal Medicine 2017). The risk level is expressed as a figure, which is considered high when it is more than 0.0033 and low when it is less than 0.0033 (Mater Centre for Maternal Fetal Medicine 2017; Spencer et al. 2003). The triple test is a blood test conducted in the second trimester of pregnancy at 15-20 weeks gestation (Mater Centre for Maternal Fetal Medicine 2017). The test measures the concentrations of certain hormones (alpha-fetoprotein, estriol, human chorionic gonadotropin, and inhibin A) in the placenta and fetal blood to determine the risk of chromosomal abnormalities. The levels of these hormones as well as the baby's gestational age and maternal age and weight are used to determine the risk of certain chromosomal abnormalities (Raising Children Network 2021).
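The FTCS cut-off quoted above is given as a raw probability (0.0033). Expressed as "1 in N" odds, which is how such screening results are often reported, the same threshold reads as roughly 1 in 300; a minimal sketch of the conversion, using only the figure stated in the text (the function name is ours):

```python
def risk_to_odds(risk):
    """Express a screening risk score (a probability) as '1 in N' odds."""
    return round(1.0 / risk)

cutoff = 0.0033  # FTCS cut-off quoted in the text
print(f"scores above {cutoff} (about 1 in {risk_to_odds(cutoff)}) are treated as high risk")
# scores above 0.0033 (about 1 in 303) are treated as high risk
```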
NIPT involves examining the blood of a pregnant woman for maternal and fetal DNA fragments and can be done any time from 10 weeks gestation (Mater Centre for Maternal Fetal Medicine 2017). NIPT, also referred to as cell-free DNA (cfDNA) testing, gives more accurate results than other screening tests, but comparatively expensive (Raising Children Network 2021). The screening counts several maternal and fetal DNA fragments using massive sequencing and assigns them to chromosomes (Rizos 2018). If the result indicates high risk, invasive testing with amniocentesis or chorionic villus sampling may be offered as a diagnostic test (Mater Centre for Maternal Fetal Medicine 2017). Notably, NIPT does not determine the risk of structural abnormalities and the results do not determine a particular abnormality (Mater Centre for Maternal Fetal Medicine 2017). Diagnostic tests A diagnostic test examines the tissues of the fetus for chromosomal abnormalities. There are two methods of sampling the tissues, which are chorionic villus sampling (CVS) and amniocentesis (Raising Children Network 2021). The CVS takes the samples from the placenta, while amniocentesis takes samples from the amniotic fluid (the fluid around the baby). The tissues are then tested in the laboratory for chromosomal abnormalities by karyotyping, fluorescence in situ hybridization (FISH), or molecular karyotyping (Raising Children Network 2021). Amniocentesis Amniocentesis is often offered as an alternative to CVS after 15 weeks of pregnancy. It may also be done if CVS has been performed, but the CVS results are not clear (Raising Children Network 2021). Amniocentesis is used to take samples of the fluid that surrounds the baby in the uterus (Raising Children Network 2021). An ultrasound is used to direct a thin needle into the uterus to obtain samples of the fluid (Raising Children Network 2021). Amniocentesis has a risk of a miscarriage of less than 1 in 200 (Raising Children Network 2021). This shows that amniocentesis is less risky compared to CVS. However, like the CVS, this risk should be considered before embarking on the test. Management of chromosomal abnormalities As stated earlier, most chromosomal abnormalities are not treatable. However, some complications that result from chromosomal abnormalities can be treated to improve the quality of life of both the mother and baby (Winchester Hospital 2021). Furthermore, Cody and Hale (2015) believe that chromosomal abnormalities can be treated by changing the expression of some genes. Abnormalities caused by deletions can be treated by upregulating some genes to induce one gene to perform the work of two. Chromosomal abnormalities resulting from duplications can also be treated by knocking off some genes to normalize the expression level. The same logic can be employed for other structural and numerical abnormalities. Cody and Hale further stated that one of the several ways gene expression can be changed is through diets. The scientists explained the mechanism involved using alcohol dehydrogenases on alcohol metabolism as an example. Individuals that rarely drink alcohol easily feel drowsy after taking a shot. However, the drowsiness disappears after regularly taking the same shot for a long time. This is because repeated drinking of alcohol increases the production of important proteins (alcohol dehydrogenases) that breakdown and metabolize alcohol. Aside from diets, Cody and Hale believe that drugs can be formulated to change gene expression and their proteins. 
For example, a drug called statin increases the production of important proteins that help the body eliminates bad fats, thus can help treat certain disorders of lipid metabolism. Preventive measures can also go a long way in cushioning or preventing the occurrence of chromosomal abnormality. Notably, the risk of transmission of an abnormality to a baby increases as the mother ages. So, women above 35 years should see a doctor three months before conceiving a baby (Winchester Hospital 2021). Such individuals should also consider taking prenatal vitamin a day for the three months before becoming pregnant (Winchester Hospital 2021). The vitamin should have 400 µg of folic acid and should be taken through the first month of pregnancy (Winchester Hospital 2021). They should also visit their doctors regularly. Additionally, they should eat healthy foods, especially foods that have folic acids like cereals, grain products, leafy greens, oranges and orange juice, and peanuts (Winchester Hospital 2021). Such individuals should cultivate a healthy weight, avoid smoking and alcoholic drinks, and should not take any drug unless recommended by their doctors (Winchester Hospital 2021). Conclusion Several articles reviewed showed that errors during and after gametogenesis may cause infertility-predisposing chromosomal abnormalities. These abnormalities include Klinefelter syndrome, Jacob syndrome, Triple X syndrome, Turner syndrome, and Down syndrome as well as deletion, duplication, inversion, and ring chromosomes. Most often, these abnormalities are not inherited and occur spontaneously. Male chromosomal infertilities are characterized by spermatogenesis arrest, resulting in azoospermia, oligospermia, and abnormal genitals, and female is characterized by premature ovarian insufficiency, amenorrhea, miscarriage, and ambiguous genitalia. Most chromosomal infertilities are incurable. However, early testing, resulting in precautionary measures may lessen the severity of the conditions. There is also a growing belief that changing gene expression through certain diets and drugs may neutralize the effects of chromosomal abnormalities. Women at increased risk of chromosomal abnormalities such as those with advanced age (≥ 35 years) and those with a family history
2021-03-22T18:13:02.875Z
2021-03-19T00:00:00.000
{ "year": 2021, "sha1": "09e925ab399af017453e0e731af81c2db998d255", "oa_license": "CCBY", "oa_url": "https://bnrc.springeropen.com/track/pdf/10.1186/s42269-021-00523-z", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "9d44443cc5855830c9432eb6a36906e77c1ec263", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
212904368
pes2o/s2orc
v3-fos-license
RAPD and ISSR derived SCAR marker(s) for aphid tolerance in Brassica juncea Czern. and Coss.
In the present study, eight genotypes of Brassica juncea, comprising four tolerant (IC-399802, IC-491089, IC-312545, IC-312553) and four susceptible (IC-385686, IC-264131, IC-426392, Laxmi) genotypes for aphid tolerance, were used to generate Sequence Characterized Amplified Region (SCAR) markers through dominant PCR-based markers (51 RAPD and 12 ISSR markers). Thirteen RAPD (Random Amplified Polymorphic DNA) and eight ISSR (Inter Simple Sequence Repeat) markers were found to be polymorphic, but only three primers, OPE16 (RAPD), UBC 839 (ISSR) and UBC 864 (ISSR), could distinguish tolerant genotypes from susceptible ones. OPE16 (RAPD) produced a unique band of approximately 600 bp in susceptible genotypes that was absent in the tolerant genotypes. Similarly, UBC 839 (ISSR) yielded a unique band of ~800 bp in the bulk tolerant sample, while UBC 864 (ISSR) yielded three bands of ~1200 bp, ~1000 bp, and ~500 bp in tolerant genotypes that were absent in the susceptible genotypes. These unique bands were excised to generate five sets of SCAR markers. Among the five sets of SCAR markers, only the BJSCAR F1 and BJSCAR R1 set yielded a promising result: a band present in all four susceptible genotypes as well as in the bulk susceptible sample, and absent in all the tolerant genotypes and in Brassica fruticulosa (highly tolerant to aphids, used as a control). This SCAR marker could therefore be used successfully in screening B. juncea genotypes in future breeding programs.
Introduction
Brassica juncea (L.) Czern. & Coss., commonly known as 'Indian mustard', is one of the most important crop species of the family Brassicaceae. It is a Rabi season crop and is in high demand as a source of edible oil. However, one of its major biotic constraints is the mustard aphid (Lipaphis erysimi K.), which hampers its productivity. The aphid belongs to the order Homoptera and the family Aphididae. Both the adult and nymph stages adversely affect plant growth and development by sucking sap from the leaves, inflorescences and pods, rendering the plants weak and fragile. According to some reports, L. erysimi can cause 10-90% yield loss in rapeseed-mustard. Although mustard aphids can be controlled satisfactorily with insecticides, the residual effect of these toxic chemicals harms the environment as well as beneficial insects. The development of resistant/tolerant varieties is therefore the best approach to tackle the menace of aphids. Apart from developing resistant/tolerant varieties, it is also important to develop reliable screening techniques. In B. juncea, some morphological and biochemical traits, such as a small and hardy inflorescence with loosely packed buds, darker leaves, more branches with a wider angle of orientation, lower total sugar and sulphur contents, and higher glucosinolates, particularly sinigrin, have been observed to be related to aphid tolerance (Rai & Sehgal, Ahuja et al., Martínez-Ballesta et al.) [13,1,7]. To date, there is no report of tolerant cultivars successfully developed by conventional means in B. juncea with systemic plant responses in the form of direct or indirect defenses against aphid attack. At the same time, a suitable high-throughput method for screening large numbers of genotypes in breeding for selection of tolerant cultivars in B. juncea is yet to be developed. Prior reports suggest that efforts have been made in many crops to study resistance to biotic stresses using molecular markers [10,12,9,2]. However, limited information is available in
B. juncea related to the development of SCAR markers for aphid resistance, and the available markers are not robust enough for a high level of confirmation (Chander et al.) [2]. Sequence Characterized Amplified Region (SCAR) markers are important codominant molecular markers used for tagging a gene or linking to a specific trait. Therefore, it would be useful to develop a good SCAR marker related to aphid tolerance for screening B. juncea genotypes.
Material and Methods
2.1 Plant material
The plant materials included four tolerant and four susceptible genotypes of B. juncea identified on the basis of earlier field trials conducted at the Oilseed section of the PBG Department, BAC, Sabour.
DNA extraction
Total DNA was extracted from leaves collected in the field following the CTAB method described by Doyle and Doyle [5] with a few modifications, grinding 100 mg of leaf tissue in liquid nitrogen with a mortar and pestle. For bulked DNA analysis, two DNA bulks were constructed, each using the four genotypes with either tolerance or susceptibility to aphid infestation. The integrity of the DNA was then checked by electrophoresis on a 0.8% agarose-EtBr gel. The gel was viewed on a UV trans-illuminator and captured on a gel documentation system (UVITEC, Cambridge, UK).
PCR setup
PCR was carried out on a thermal cycler (Veriti #9902, ABI, Singapore) in a 15 µl reaction: 7.5 µl of 2X Premix Taq (Xcelris Genomics, India), 0.5 µl of primer (10 µM), 5 µl of distilled autoclaved water, and 2 µl of DNA (50 ng). The cycling conditions were: initial denaturation at 94°C for 5 min; 40 cycles of denaturation at 94°C for 1 min, annealing at 32-49°C for 1 min and extension at 72°C for 1 min; and a final extension at 72°C for 7 min, using the primers listed in Table 1. The PCR products were resolved by electrophoresis on a 1.5% agarose-EtBr gel. The gel was viewed on a UV trans-illuminator and captured on a gel documentation system (UVITEC, Cambridge, UK).
Band excision and elution
The six unique bands selected were excised and eluted using the quick gel extraction method according to the manufacturer's protocol (Thermo Fisher, USA).
Transformation into competent cells of the DH5α strain of E. coli
DH5α E. coli cells (NEB, UK) were transformed with the ligated product using the standard protocol (Sambrook et al.) [14]. After an hour, these cells were spread on Luria-Bertani (LB) agar plates containing ampicillin (100 µl/ml) and X-gal (20 µl/ml).
Re-streaking of colonies
All the white (transformed) colonies were re-streaked on fresh plates containing ampicillin (100 µl/ml) and X-gal (20 µl/ml) and incubated overnight at 37°C.
Confirmation of cloning of the desired bands
The cloning of the desired bands was confirmed by double digestion with EcoRI and BamHI and by colony PCR using vector-specific M13 forward and reverse primers.
Double digestion using restriction enzymes
For confirmation of the insert, restriction enzyme-based double digestion was performed using the following constituents in a 30 µl reaction: 10 µl plasmid, 0.5 µl each of the restriction enzymes BamHI and EcoRI (10 U/µl, Thermo Scientific, USA), 2 µl Tango buffer, and 17 µl autoclaved distilled water to make up the volume. The reaction was incubated for 1 hour and checked on a 1% agarose gel; the gel was viewed on a UV trans-illuminator and captured on a gel documentation system (UVITEC, Cambridge, UK).
Colony PCR
Colony PCR was performed to check the cloning of the PCR product.
A plasmid containing the ligated DNA insert was confirmed by colony PCR using vector-specific primer pairs flanking the cloning site {M13 forward (F) 5'-GTAAAACGACGGCCAGTG-3' and M13 reverse (R) 5'-GGAAACAGCTATGACCATG-3'}. For colony lysate preparation (Sambrook et al.) [14], a small portion of a transformed bacterial colony was picked up with a clean micro-tip and transferred into 50.0 µl of colony lysis buffer. The micro-centrifuge tubes were incubated in a boiling water bath for 10 min and chilled on ice for 2 min. After cooling, cell debris was pelleted by centrifugation for 2 min and the supernatant (colony lysate) was transferred to a new micro-centrifuge tube. The PCR reaction was set up as follows: 7.5 µl of 2X Premix Taq (Xcelris Genomics, India), 0.5 µl of each M13 F and M13 R primer (10 µM), 5 µl of distilled autoclaved water, and 2 µl of DNA (50 ng). The PCR parameters were: 25 cycles of 94°C for 30 sec, 54°C for 40 sec and 72°C for 1-3 min; a final extension at 72°C for 5 min; and cooling to 4°C. The amplified products were electrophoresed on a 1.2% agarose-EtBr gel for analysis.
Sequencing
Plasmids were sequenced at Xcelris Genomics, Ahmedabad, using the M13 forward and M13 reverse primers for forward and reverse sequencing, respectively.
BLAST analysis
The nucleotide sequences obtained were first screened for vector sequence contamination using the online VecScreen tool (https://www.ncbi.nlm.nih.gov/tools/vecscreen/) and then checked manually for the vector's flanking sequence near the multiple cloning site (MCS). BLASTN analysis (http://blast.ncbi.nlm.nih.gov/Blast.cgi) was then done to find any similarity in the GenBank database.
Result
DNA fingerprinting of the four susceptible and four tolerant genotypes was carried out using fifty-one RAPD primers and twelve ISSR primers. Of these, only thirteen RAPD primers and eight ISSR primers showed amplification. Among the thirteen RAPD primers and eight ISSR primers, only one RAPD primer, OPE 16, and two ISSR primers, UBC 839 and UBC 864, produced five unique bands that discriminated tolerant genotypes from susceptible ones. The RAPD primer OPE 16 produced one band of size ~600 bp (Fig. 1.1) that could discriminate tolerant genotypes from susceptible ones for aphid tolerance. Similarly, UBC 839 yielded a unique band of ~800 bp in the bulk tolerant sample (Fig. 1.2), while UBC 864 yielded three bands of ~1200 bp, ~1000 bp, and ~500 bp in the tolerant genotypes (Fig. 1.3). These unique bands were excised and eluted from the gel, ligated into the TA-cloning vector (Invitrogen, Thermo Fisher, USA) and transformed into E. coli (DH5α) cells (NEB, UK) following a standard transformation protocol (Sambrook et al.) [14]. The blue-white screening method was used for the selection of white positive clones, as shown in Fig. 1.4. The insert of each clone was further confirmed by double digestion using EcoRI and BamHI and by colony PCR. Representative figures of the double digestion and colony PCR are shown in Fig. 1.5(a) and Fig. 1.5(b), respectively.
Sequencing and analysis of cloned RAPD/ISSR fragments
The sequencing of the plasmids was performed by Sanger's dideoxy method with the M13 F and M13 R primers for forward and reverse sequencing, respectively. These sequences were first inspected for cloning-site-specific sequences and for primer sequences. Vector sequence contamination was also checked with the online VecScreen tool (https://www.ncbi.nlm.nih.gov/tools/vecscreen/).
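The vector screening and BLASTN similarity searches described above were run through the NCBI web tools. For readers who prefer to script the BLASTN step, the following is a minimal sketch assuming the Biopython package is available; the fragment sequence shown is a placeholder, not one of the BJSCAR sequences.

```python
# Minimal sketch of a scripted BLASTN search against the NCBI nucleotide
# collection (nt), assuming Biopython is installed. The fragment below is a
# placeholder sequence, not a sequence from this study.
from Bio.Blast import NCBIWWW, NCBIXML

fragment = "ATGCGTACCGGTTAGCCTAGGATCCGTACGTTAGCATCGGATCCATGGCTAAGCTTGCA"

# Submit the query to NCBI (requires an internet connection).
result_handle = NCBIWWW.qblast("blastn", "nt", fragment)

# Parse the XML output and print the five best-scoring alignments.
record = NCBIXML.read(result_handle)
for alignment in record.alignments[:5]:
    hsp = alignment.hsps[0]
    print(f"{alignment.title[:60]}  E-value: {hsp.expect:.2e}  "
          f"identities: {hsp.identities}/{hsp.align_length}")
```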
The resulting sequences of the RAPD/ISSR fragments, i.e. the SCAR sequences, were thus named BJSCAR1, BJSCAR2, BJSCAR3, BJSCAR4 and BJSCAR5. These sequences were searched for similarity in the GenBank nucleotide database using the BLASTN programme (https://blast.ncbi.nlm.nih.gov); BJSCAR2, BJSCAR3, BJSCAR4 and BJSCAR5 showed similarity with genomic sequences of Raphanus sativus, Brassica napus and Arabidopsis thaliana, whereas BJSCAR1 showed no match.
Studying polymorphism through the developed SCAR markers
The sequences obtained were used to design seven sets of SCAR primers using the online tool available at http://bioinfo.ut.ee/primer3-0.4.0. Of the seven sets of SCAR primers designed from the unique band sequences, BJSCAR1-F1 and BJSCAR1-R1 (Table 2.0) yielded a prominent unique band in all four susceptible genotypes as well as in the bulk susceptible sample; this band was absent in the tolerant genotypes under study (Fig. 1.6). This primer set also showed no amplification in B. fruticulosa, which is highly tolerant to aphids (used as a control), thereby confirming the discriminatory power of this SCAR primer set for aphid-susceptible and aphid-tolerant genotypes of B. juncea (Fig. 1.6). The other SCAR primers (BJSCAR1-F2 and BJSCAR1-R2 for SCAR1; BJSCAR2-F1 and BJSCAR2-R1 for SCAR2; BJSCAR3-F1, BJSCAR3-R1, BJSCAR3-F2 and BJSCAR-R2 for SCAR3; BJSCAR4-F1 and BJSCAR4-R1 for SCAR4; BJSCAR5-F1 and BJSCAR5-R1 for SCAR5) could not show clear-cut polymorphism between susceptible and tolerant genotypes (Fig. 1.7-1.10).
Conversion of RAPD/ISSR-derived fragments into SCAR markers
In the present study, RAPD/ISSR-derived fragments were cloned, sequenced and converted into SCAR markers, namely BJSCAR1, BJSCAR2, BJSCAR3, BJSCAR4 and BJSCAR5. BLASTN analysis of these SCAR sequences was done to see whether any of them were related to resistance. The sequences of BJSCAR2, BJSCAR3, BJSCAR4 and BJSCAR5 showed high similarity with nucleotide sequences of A. thaliana, B. rapa, B. napus and other Brassica spp., but none was specifically related to resistance. BJSCAR1 did not show any match in the GenBank nucleotide database and could therefore be a novel sequence. There are several reports of RAPD/ISSR-derived SCAR markers developed for polymorphism studies related to biotic and abiotic stresses in crops [3,4,15]. A list of important traits linked with molecular markers in B. juncea is shown in Table 2.3. SCAE1 and SCAE2 primers were designed that discriminated heat-tolerant and susceptible tomato [3]. In sugarcane, a RAPD-derived SCAR marker (OPAK 12724) was developed and used for screening drought-tolerant and drought-susceptible genotypes [15]. Similarly, for powdery mildew in pea, a RAPD-derived SCAR marker, ScOPX 04880, was developed to screen for the resistance gene 'er1' in tolerant and susceptible genotypes [16]. In B. juncea, a putative source of aphid resistance was reported based on the molecular analysis of accessions identified as tolerant to the mustard aphid [2]. The authors screened 34 germplasm accessions with 284 RAPD primers, of which 87 primers showed amplification; four of these were polymorphic, and finally one RAPD primer that could clearly discriminate the tolerant and susceptible accessions was converted into a SCAR marker [2].
SCAR primer developed to discriminate the aphid-susceptible and aphid-tolerant genotypes of B. juncea
Of the seven sets of SCAR primers obtained from the SCAR marker sequences, BJSCAR1-F1 and BJSCAR1-R1 yielded a prominent unique band in all four susceptible genotypes as well as in the bulk susceptible sample; this band was absent in the tolerant genotypes (Fig. 1.6).
This primer set also showed no amplification in B. fruticulosa, which is highly tolerant to aphids (used as a control), thereby confirming the discriminatory power of this SCAR primer set for aphid-susceptible and aphid-tolerant genotypes of mustard. The other SCAR primers (BJSCAR2-F1 and BJSCAR2-R1 for SCAR2; BJSCAR3-F1 and BJSCAR3-R1 for SCAR3; BJSCAR4-F1 and BJSCAR4-R1 for SCAR4; BJSCAR5-F1 and BJSCAR5-R1 for SCAR5) could not show clear-cut polymorphism between susceptible and tolerant genotypes (Fig. 4.7.2-Fig. 4.7.5). This indicates that the polymorphism of the RAPD/ISSR markers from which these markers were derived was probably lost upon conversion into SCAR markers. There are reports that the conversion of RAPD to SCAR markers resulted in a loss of the polymorphism linked to tolerance/sensitivity to Fusicoccum in almond [8], and this loss of polymorphism has also been reported elsewhere [6,17]. Even the SCAR marker previously reported in B. juncea for aphid resistance could not provide unequivocal results, since the association between the marker and resistance was not always unidirectional, and refinement of the marker was suggested [2]. SCAR markers have also been reported to yield ambiguous results or polymorphism; the reasons given were that either the original RAPD polymorphisms were caused by nucleotide mismatches in the priming sites, or crossing over occurred between the gene controlling the trait and the marker [11]. Hence, the emphasis should be on confirming the reproducibility of results obtained with these markers for a given trait or characteristic. The SCAR marker developed in the present study could be further refined using more aphid-resistant genotypes for its wider applicability in Brassica spp. Thus, in the present study, only one SCAR marker was developed and validated in different susceptible/tolerant genotypes of B. juncea. This marker distinguished susceptible and tolerant genotypes. It was also tested with B. fruticulosa, an aphid-tolerant genotype, which further confirms its discriminatory power. We suggest that it could be further refined using more aphid-resistant genotypes for its wider applicability in Brassica spp.
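The SCAR primers in this study were designed with the online Primer3 tool mentioned above. Purely as an illustration of how primer candidates can be derived from a cloned fragment sequence and screened, the sketch below takes the two ends of a hypothetical fragment as forward and reverse primers and reports their GC content and Wallace-rule melting temperature; the sequence, primer length and criteria are assumptions, not values from this study.

```python
# Illustrative screening of end-derived primer candidates from a cloned
# fragment. The fragment is a hypothetical placeholder; the actual primer
# design was performed with Primer3 (http://bioinfo.ut.ee/primer3-0.4.0).

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq: str) -> str:
    """Return the reverse complement of a DNA sequence."""
    return seq.translate(COMPLEMENT)[::-1]

def gc_content(seq: str) -> float:
    """GC content of a sequence as a percentage."""
    return 100.0 * (seq.count("G") + seq.count("C")) / len(seq)

def wallace_tm(seq: str) -> int:
    """Wallace-rule melting temperature, 2(A+T) + 4(G+C), for short oligos."""
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2 * at + 4 * gc

def end_primers(fragment: str, length: int = 20):
    """Forward primer from the 5' end; reverse primer as the reverse
    complement of the 3' end of the fragment."""
    return fragment[:length], reverse_complement(fragment[-length:])

if __name__ == "__main__":
    fragment = "ATGGCTAAGCTTGCATGCCTGCAGGTCGACTCTAGAGGATCCCCGGGTACCGAGCTCGAATTC"
    fwd, rev = end_primers(fragment)
    for name, primer in (("forward", fwd), ("reverse", rev)):
        print(f"{name}: {primer}  GC = {gc_content(primer):.1f}%  "
              f"Tm (Wallace) = {wallace_tm(primer)} degC")
```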
2020-03-05T10:57:16.602Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "96818362aa2c5839a15c9e13a76d3c8c5e194eb4", "oa_license": null, "oa_url": "https://www.chemijournal.com/archives/2020/vol8issue1/PartAP/8-1-332-596.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "ea0f2056b1d0f82ed189c45b921d3c9d38689336", "s2fieldsofstudy": [ "Agricultural and Food Sciences", "Biology" ], "extfieldsofstudy": [ "Biology" ] }
93587864
pes2o/s2orc
v3-fos-license
INFLUENCE OF VARIOUS COAGULATION FACTORS ON CHEMICAL COMPOSITION OF SERA GAINED BY CENTRIFUGATION FROM CASEIN GEL
Abstract: Technological operations applied during curd processing influence syneresis and the total solids content of cheese. Syneresis is not a simple physical process representing whey segregation due to curd contraction; numerous factors can influence the process of syneresis. The aim of this work was to investigate the influence of various parameters (pH, quantity of CaCl2 added, temperature of coagulation and heat treatment) on induced syneresis. Reconstituted instant skim milk (control samples) and reconstituted instant skim milk heated at 87°C for 10 min (experimental samples) were coagulated at 30°C and 35°C, at pH 5.8 and 6.2, and with 100, 200 and 400 mg/l of CaCl2 added. According to our results, these parameters had a significant influence on the nitrogen content of the serum as well as on the distribution of nitrogen matter from the gel into the sera. Due to the formation of coaggregates, the best rheological properties of the gel were obtained for the experimental samples coagulated with 400 mg/l of CaCl2 added, at pH 5.8 and a temperature of 35°C.
According to van den Bijgaart (1989), the chemical composition of milk, especially its fat content, has a direct effect on syneresis. A low casein content of milk induces the formation of a curd of low hardness and an increase of syneresis; milk fat and proteins are also lost to a greater extent (Pearse and Mackinlay, 1989; Pudja and Maćej, 1996). Due to the change of curd structure, the total solids concentration of milk significantly reduces the intensity of syneresis. Milk homogenisation causes the formation of a curd with a fine structure. Due to casein adsorption, milk fat droplets acquire a pseudo-protein nature and induce slower whey separation (Dozet et al., 1972; Green and Grandison, 1993; Green et al., 1983; Pearse and Mackinlay, 1989). During syneresis, whey passes through the protein matrix, which can be described by Darcy's law. Thus, Walstra (1993) reported that gel permeability has a significant influence on the intensity of syneresis. The permeability increases during coagulation. This parameter depends on several factors, such as the degree of cross-linking of the curd, the casein concentration and the temperature. An increase of pH and of casein concentration decreases permeability and syneresis, while an increase of temperature has the opposite effect (Green, 1987; Pearse and Mackinlay, 1989; Walstra, 1993; Zoon et al., 1988). The available literature contains differing data about the effect of CaCl2 on syneresis. Most authors suggest that the addition of a low quantity of CaCl2 (up to 10 mM) increases syneresis, while others suggest that the influence of this parameter is small (Walstra, 1993). Van der Waarden (according to Walstra, 1993) considered that the addition of CaCl2 increases the intensity of syneresis because of the accompanying decrease of pH; however, if the pH value is kept constant, the addition of CaCl2 will reduce syneresis. In contrast, the addition of MgCl2 significantly increases syneresis (Walstra, 1993). Because syneresis depends significantly on the β-casein content of the micelle, cold treatment of milk indirectly reduces curd syneresis. To eliminate the effect of cooling, Pearse and Mackinlay (1989) suggested that milk should be thermostated for 30 minutes at 60°C.
Heat treatments usually used in cheese making have little influence on curd syneresis. However, more severe heat treatments of milk cause coaggregate formation and induce the formation of a softer, more cross-linked curd. Such a curd is characterised by a low capability of contraction and slower whey separation (Green and Grandison, 1993; Pearse and Mackinlay, 1989; Pearse et al., 1985; Pearse et al., 1986; Walstra, 1993). At lower pH, the intensity of syneresis increases due to a higher capability of contraction (Green, 1987; Walstra, 1993). Lower pH also affects the residual quantity of rennet and Ca ions, and thus influences the structure of cheese. At lower pH, the dissociation of CCP is significantly higher, as is the Ca content of the whey.
Material and Methods
In this work, the effect of pH, CaCl2, temperature of milk coagulation and heat treatment on induced syneresis was investigated. Reconstituted instant skim milk powder was used; to avoid the effect of variation in the chemical composition of raw milk, this milk is usually used in the standard procedure for rennet activity determination. The total solids content of the reconstituted skim milk was adjusted to 9%. Reconstituted skim milk was designated the control sample, while reconstituted skim milk heat-treated at 87°C for 10 minutes was the experimental sample. The following coagulation conditions were used:
- coagulation temperature (30°C and 35°C);
- milk pH value (6.2 and 5.8);
- amount of added CaCl2 (100, 200 and 400 mg/l).
A lactic acid solution (10%) was used to adjust the pH, while the level of CaCl2 was adjusted with a 20% solution of this compound. The following were determined:
- total nitrogen content in total solids, by calculation;
- distribution of nitrogen matter from gel to sera, by calculation.
All experiments were repeated 6 times and statistical analysis was performed. All data for the investigated parameters are shown as mean values. Analyses of variance were also performed for all data, together with the standard deviation and coefficient of variation (Stanković et al., 1989).
Total solids content of sera
The results of these investigations are shown in Table 1, from which it can be seen that the total solids content of the serum separated after centrifugation at 3,000 rpm was similar regardless of the coagulation conditions. Total solids ranged from 6.56% (control samples coagulated at pH 6.2 and 35°C with 100 mg/l of CaCl2 added) to 6.28% (control samples coagulated at pH 6.2 and 35°C with 400 mg/l of CaCl2 added). The coefficient of variation was also low, indicating that the results within each series varied over a narrow range. According to these results, it can be concluded that the gels obtained from both types of samples under the experimental conditions were completely formed.
Nitrogen content of sera
The effect of different coagulation conditions on the nitrogen content of the milk serum is shown in Table 2 and Fig. 1. From the data shown in Table 2 and Fig. 1,
it can be seen that, under the same coagulation conditions, the serum of all control samples, except in two cases, had a higher nitrogen content than that of the experimental samples. The first exception was the samples coagulated at 30°C and pH 5.8 with 200 mg/l CaCl2 added, where the difference between the samples was 0.0015%; more exactly, if the nitrogen content of the experimental samples is taken as 100%, these samples had a 1.09% higher nitrogen content than the control ones. Likewise, in the case of the samples coagulated with 400 mg/l CaCl2 at the same temperature (30°C) and pH 6.2, the difference between the experimental and control samples was 0.0017%; namely, the serum of the experimental samples had 1.27% more nitrogen than that of the control ones.
If the nitrogen content of the control samples coagulated at pH 6.2 and 30°C with 100 mg/l CaCl2 added is designated as 100%, then the nitrogen content of the serum of the experimental samples coagulated at pH 6.2 and 35°C with 400 mg/l CaCl2 added was the minimum. Their average content was 0.1210%, which represented 81.87% of the control sample marked as 100%. A slightly higher nitrogen content was observed for the samples coagulated at pH 5.8 at the same temperature and CaCl2 level; for these samples the nitrogen content was 0.1240%, representing 85.63% of the maximal value.
From the data shown in Table 2 and Fig. 1 it can also be seen that the nitrogen content of the control samples coagulated at 30°C and 35°C increased with the increase of added CaCl2, and that temperature caused the most pronounced reduction of nitrogen content. The effect of increasing the added CaCl2 from 100 to 200 mg/l for control samples coagulated at pH 5.8 was similar, irrespective of the temperature used. However, the serum of samples with 400 mg/l of CaCl2 added at a temperature of 35°C had a slightly higher nitrogen content than that of samples with 200 mg/l of CaCl2 added, treated at 30°C. This implies that, under these conditions, the Ca2+ concentration has no significant influence on the nitrogen content. It can also be seen that the serum of control samples with 400 mg/l of CaCl2 added at pH 5.8 had more nitrogen than the samples treated with the same quantity of CaCl2 at pH 6.2 at both temperatures. This may be explained by a higher degree of contraction and a higher degree of distribution of soluble nitrogen compounds from the gel to the serum.
The serum of the experimental samples, except for the two previously mentioned cases, had a lower nitrogen content than that of the control samples. The reasons for this are complex:
- as a result of the heat treatment (87°C/10 min), a chemical complex between casein and serum proteins is formed. Thus, in contrast to the control samples, it seems that the nitrogen content of the serum of the experimental samples was mostly non-proteinaceous. This is in agreement with the data reported in the literature: about 4-10% of the total milk nitrogen content is non-proteinaceous nitrogen (Alekseeva et al., 1986).
- coagulation factors such as a relatively low pH (5.8), temperature and the quantity of CaCl2 added (especially 200 mg/l and 400 mg/l) represent optimal conditions for preparing gels with better rheological properties. These gels are characterized by improved water-holding capacity and a more cross-linked structure.
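The comparisons above repeatedly designate one treatment as 100% and express the nitrogen content of the other samples relative to it. The sketch below reproduces that simple calculation with hypothetical nitrogen values; it is only meant to make the arithmetic explicit and does not use the measured data from the tables.

```python
# Relative-percentage comparison used in the results: one reference sample is
# designated as 100% and all others are expressed against it.
# All nitrogen values below are hypothetical placeholders, not measured data.

sera_nitrogen_percent = {
    "control, pH 6.2, 30 C, 100 mg/l CaCl2 (reference)": 0.150,
    "experimental, pH 5.8, 35 C, 400 mg/l CaCl2": 0.130,
    "experimental, pH 6.2, 35 C, 400 mg/l CaCl2": 0.120,
}

reference = sera_nitrogen_percent["control, pH 6.2, 30 C, 100 mg/l CaCl2 (reference)"]

for sample, nitrogen in sera_nitrogen_percent.items():
    relative = 100.0 * nitrogen / reference
    print(f"{sample}: {nitrogen:.3f}% N -> {relative:.2f}% of the reference")
```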
Nitrogen content of total solids of sera
If the nitrogen results are expressed relative to the total solids content of the sera, the situation is somewhat different. From this point of view, the effect of the heat treatment used for coaggregate formation on the higher degree of milk protein utilization, the higher retention of proteins in the gel and their lower diffusion into the sera is more clearly visible. This indirectly characterizes the rheological properties of the gels. The results of these investigations are presented in Table 3 and Figure 2. According to our results, the nitrogen content of the total solids of serum obtained from the control samples is in all cases higher than that of the experimental samples. This confirms that the difference is the result of heat-induced coaggregate formation (87°C/10 min). The serum of experimental samples coagulated at pH 5.8 and 35°C with 400 mg/l of CaCl2 added had the minimum nitrogen content; its average value was 1.91%. Coagulation at pH 6.2 under otherwise identical conditions reduced the nitrogen content to 1.93%.
If the highest nitrogen content registered in the total solids (samples coagulated at pH 6.2 and 30°C with 100 mg/l of CaCl2 added) is expressed as 100%, then the value obtained for the experimental samples coagulated at pH 5.8 and 30°C with 400 mg/l CaCl2 added is 81.62%. On the other hand, if the control samples coagulated at pH 5.8 and 35°C with 400 mg/l CaCl2 added are taken as the maximum (100%), then the experimental samples treated under the same conditions have a nitrogen content lower by 9.48%. These data confirm our previous conclusion about the higher degree of protein utilization during the preparation of the experimental samples, mostly due to the utilisation of serum proteins.
Distribution of nitrogen matter into sera
The results of these investigations are presented in Table 4 and Figure 3. From these results, it can be seen that during centrifugation, under the same conditions, a higher quantity of nitrogen was distributed into the serum of the control samples (Table 2, Figure 1). From the results presented in Tab. 4 and Fig. 3 it can be seen that the lowest distribution of nitrogen matter from gel to sera was for the experimental samples coagulated at pH 5.8 and a temperature of 30°C with CaCl2 added at a concentration of 200 mg/l. The degree of distribution of nitrogen matter for the experimental samples coagulated with 100 mg/l CaCl2 added was 8.14%, while the values for samples treated with 200 mg/l and 400 mg/l of CaCl2 added were 7.13% and 8.26%, respectively.
Summary
In this work, the influence of various factors (the applied heat treatment of milk, pH, the quantity of added CaCl2 and the coagulation temperature) on the quantity of separated serum, i.e. syneresis, was investigated. Reconstituted skim milk (control sample) and reconstituted skim milk heat-treated at 87°C/10 minutes (experimental sample) were coagulated at different temperatures (30°C and 35°C), at pH values of 5.8 and 6.2, and with the addition of 100, 200 and 400 mg/l CaCl2.
Tab. 1. Total solids content of sera obtained by gel centrifugation at a centrifugal force of 3,000 revolutions a minute.
Tab. 3. Total nitrogen content in total solids of sera obtained by gel centrifugation at a centrifugal force of 3,000 revolutions a minute.
Fig. 3. Nitrogen distribution in sera obtained by gel centrifugation at a centrifugal force of 3,000 revolutions a minute.
Fig. 2. Nitrogen content in total solids of sera obtained by gel centrifugation at a centrifugal force of 3,000 revolutions a minute.
2019-01-03T03:04:30.448Z
2004-01-01T00:00:00.000
{ "year": 2004, "sha1": "37aad0a34a941eac7a53209f4192caee3b9e627e", "oa_license": "CCBYSA", "oa_url": "http://www.doiserbia.nb.rs/ft.aspx?id=1450-81090402219J", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "37aad0a34a941eac7a53209f4192caee3b9e627e", "s2fieldsofstudy": [ "Agricultural and Food Sciences", "Chemistry" ], "extfieldsofstudy": [ "Chemistry" ] }
225040944
pes2o/s2orc
v3-fos-license
How Enaction and Ecological Approaches Can Contribute to Sports and Skill Learning
The purpose of this paper is to explain learning in sports and physical education (PE) from the perspective of enactive and ecological psychology. The learning process is first presented from the enactive perspective, and some relevant notions such as sense-making and sensorimotor schemes are developed. Then, natural learning environments are described, and their importance in the human development process is explained. This is followed by a section devoted to the learner's experience, in which some research methods are explained, such as neurophenomenology and self-confrontation interviews aimed at bringing out the meaning, sensations, and emotions that performers experience when they are immersed in their sport or a PE class. The sections on the ecological approach deal with attunement, calibration, the education of intention, and the importance of representative experimental designs. The last section addresses the main similarities and differences between the two approaches. Finally, we state our theoretical position in favor of a common project that brings together the main elements of both post-cognitive approaches.
INTRODUCTION
There has recently been an increase in the number of papers published linking enactivism and ecological psychology. This is evidence of the scientific community's growing interest in these methodological concepts and proposals (e.g., Segundo-Ortin, 2020). This trend encourages us to envisage a confluence between these embodied approaches in today's post-cognitive era (Lobo, 2019). Classical cognitivism and information-processing theory, based on a computer analogy of the mind, have both been strongly criticized. Given an analysis of athletes acting in an environment as rapidly changing as the sporting world, it currently appears very difficult to accept the explanation that learners use representations of the world, motor programs, and rules in order to act (Moe, 2005; Breivik, 2007). A revised paradigm of the mind based on this critique arose some years ago, wherein mental processes do not occur solely in the head of the performer. The approach corresponds to the "4E" theory of cognition as embodied, embedded, enacted, and extended (Rowlands, 2010). According to Gallagher (2017), these four pillars of post-cognitivism have been redefined over time, and some researchers have added more "Es" (ecological, empathic, experiential) and even an "A" (4E&A, affective) to the original idea (see also Menary, 2010; Dotov, 2016; Higueras-Herbada et al., 2019). Gonzalez-Grandón and Froese (2018) have developed an excellent explanation of the four "Es," which can be summarized as follows: embodied means that bodily structures constitute the learner's cognition, "the bodily realization of cognitive abilities as constitutive for their achievement" (p. 190); embedded means that the learner's cognition is situated in an environment and within a specific context that offers affordances by which to act; enacted means that "cognition and consciousness emerge only through the active embodied interaction, or structural coupling, of an autonomous living system with its environment" (p. 190); and extended means that "cognition is extended beyond the boundaries, thus being inherently connected with the respective physical or sociocultural environment" (p. 190).
These post-cognitivist approaches to cognition have significant implications that affect our understanding of the process involved in learning sensorimotor skills (Di Paolo et al., 2017) and in sports education (Chow et al., 2016). Performers bring sensorimotor skills to bear in all kinds of learning contexts: in natural situations, such as when children play with their friends or parents in a park close to home; in more structured learning environments, such as that usually found in a PE class at school; and in individual and group sports training sessions. For Dreyfus and Dreyfus (1986), to achieve proficiency or expertise in a particular task or sport, an apprentice must undergo a long process of practice, trial, and error. The expert level is the maximum expression of learning and optimization. In their enactive proposal, Di Paolo et al. (2017) advocate an open-ended and never-ending learning process. Over time, "mastery is the ongoing process by which the agent continuously adapts to the challenges of a changing world" (p. 107). Footballers, skaters, climbers, tennis players, and other athletes constantly adjust their sensorimotor organization to all kinds of changes. These adaptations occur every second, from one minute to the next, throughout a training session, or even from one week to the next. In this paper, we focus on enactivism and ecological psychology to explain skill acquisition in radical embodied cognitive science, which adopts elements of the ecological approach, dynamical systems theory, and the key notions of the enactivist movement (Chemero, 2009). This theoretical integration will later help us explain the incorporation of non-linear pedagogy in sports teaching and PE. The primary objective of this paper is to show how these approaches can improve our understanding of some aspects of the performer's acquisition process in sport, PE, and daily activities. This paper will give a broad outline of general ideas that can guide practitioners, learners, and academics. Despite theoretical discrepancies, we are in favor of a common project that brings together both post-cognitive approaches. The first section of this paper sets out the main notions associated with enactive learning. The second section then presents the core notions of ecological psychology in skill acquisition. The third and final section addresses a number of divergences and connections between these approaches.
ENACTIVISM IN SKILL ACQUISITION
The enactive approach to cognition was first introduced in 1991 by Varela, Thompson, and Rosch in their seminal book The Embodied Mind. For these authors, "cognition is no longer seen as problem solving on the basis of representations; instead, cognition in its most encompassing sense consists in the enactment or bringing forth of a world by a viable history of structural coupling [sic]" (p. 205). This initial research program and its founding ideas established a roadmap that researchers and thinkers from different fields of study have subsequently followed, and which is currently in full evolution. The approach has its roots in phenomenological philosophy (Gallagher and Zahavi, 2008; Gallagher, 2017), and is closely linked to the evolutionary changes that occur in humans during ontogeny, hence its relevance in explaining learning (Di Paolo, 2019). Enactivism is a non-representational approach that adopts world-involving explanations: "cognitive activity is co-constituted by agent and world" (Di Paolo et al., 2018, p. 333).
Some ideas are closely related and are essential to an understanding of enactivism: autonomy, sense-making, emergence, embodiment, and experience. An organism, and in the case that concerns us here a learner of a certain sport, is an autonomous system. According to Thompson (2007), "a distinctive feature of the enactive approach is the emphasis it gives to autonomy. In brief, an autonomous system is a self-determining system, as distinguished from a system determining from the outside, or a heteronomous system" (p. 37). For Thompson and Stapleton (2009), "the enactive approach starts from the question of how a system must be organized in order to be an autonomous system-one that generates and sustains its own activity and thereby enacts or brings forth its own cognitive domain" (pp. 23-24). Another crucial notion is sense-making, meaning the "creation and appreciation of meaning," which arises in the agent's interactions with the world (Di Paolo et al., 2010, p. 39). If we consider an agent playing a sport, "significance and valence do not pre-exist 'out there,' but are enacted" (Thompson, 2007, p. 158), or, in the words of Gallagher (2017), "the world (meaning, intentionality) is not pre-given or predefined, but is structured by cognition and action" (p. 6). The agent, through an active sensorimotor engagement with his or her activity, transforms the world "into a place of salience, meaning, and value - into an environment (Umwelt) in the proper biological sense of the term. This transformation of the world into an environment happens through the organism's sense-making activity" (Thompson and Stapleton, 2009, p. 25). "Sense-making is the interactional and relational side of autonomy. An autonomous system produces and sustains its own identity in precarious conditions and thereby establishes a perspective from which interactions with the world acquire a normative status" (p. 25). When an athlete is actively committed to his or her activity, some elements or objects become more relevant than others. According to Di Paolo et al. (2017): Sense-making does not imply sophisticated kinds of cognition, but it is implied in them. It is what is common to basic minds (Hutto and Myin, 2013) and human minds. To be clear, by "sense-making," then, we refer to the notion that objects or events become meaningful for an agent if they are involved in the normatively guided regulation of the agent's activity (e.g., by triggering or mediating it). This mode of relating to the world is the active making sense of a situation and the orientation of the agent toward a course of action that is adequate to it. (p. 123).
How We Learn to Act and Perceive
Sensorimotor learning involves changes that occur in the behavior of the agent. These changes are non-linear and are constrained by the interaction of a large number of factors, such as the learner's motivation to acquire a new skill, their sensorimotor coordination during the learning process, the number and variety of opportunities available to them to practice, and even the sociocultural environment in which they grow, develop and learn. The learner needs these possibilities. Adolph et al. (2012) showed that even the most natural or ontogenetic skills, such as walking, require immense amounts of practice. In a natural environment of free play, toddlers aged between 12 and 19 months walk an average of 2,368 steps an hour. The next milestone is to achieve sensorimotor mastery that will allow the toddler to walk very fast or run around without falling.
Movements are fundamental since motor actions generate new opportunities for learning or cascades of development in domains that go beyond the mere sensorimotor (e.g., intelligence). In response to this, Adolph and Hoch (2019) highlight the embodied, embedded, enculturated, and enabling nature of human movements. Enactive learning is eminently non-representational, evolutionary, and dynamic. Learners are the product of their history of sensorimotor coupling with the environment, depending on their phylogenesis, ontogenesis, and cultural setting (Varela et al., 1991). To paraphrase Antonio Machado's famous poem, so significant to enactivists, there is no fixed path or pre-given world that guides our way forward (Thompson, 2007, p. 13). Sensorimotor couplings correspond to earlier, ongoing interactions that leave traces or create habits; the path of learning is the product of creative interactions between learners and their environment. For Hutto and Myin (2013), the arguments that explain the changes in the agent can be found in the developmental-explanatory thesis, "which holds that mentality-constituting interactions are grounded in, shaped by, and explained by nothing more, or other, than the history of an organism's previous interactions. Sentience and sapience emerge through repeated processes of organismic engagement with environmental offerings" (p. 8). From the enactive point of view, the learner's active movements play a relevant role in the emergence of cognitive and learning processes and in perceptual learning (Bermejo et al., 2020). Doing, and learning by doing, are fundamental in couplings between the environment, the brain, and the body of the performer (Gallagher, 2017). Enactivists explain ontogenetic changes and sensorimotor development "as the growth of a network of stable patterns and the relations between them" (Di Paolo et al., 2017, p. 161). Through practice and experience, the performer will expand his or her sensorimotor repertoire and will reach a higher level of dexterity and mastery. Di Paolo et al. (2017) define it as follows: "Mastery is the ongoing process by which an agent continuously adapts to the challenges of a changing world. In our proposal, mastery consists in the refining and acquiring of new sensorimotor responses and their integration into an existing repertoire" (p. 37). Di Paolo et al. (2017) explain the process of enactive learning through a dynamical system interpretation of Piaget's theory of equilibration (p. 88). In short, learning involves going through the phases of assimilation, accommodation, and equilibration. According to these authors, the performer acquires sensorimotor schemes with practice. A sensorimotor (SM) scheme is "an organization of SM coordination patterns" (p. 90). In turn, a scheme involves a whole sequence of coordination patterns. If we think of a child who is learning to ride around a circuit on a bicycle with training wheels, the sequence would involve the following patterns: keep the handlebar in the correct position to go straight, pedal at a constant rate, look forward to know when to turn, pedal slower and move the handlebar slightly to the left to turn left, and so on. In this case, for the performer, assimilation consists of trying to maintain the stability condition even when variations arise that may affect him or her, such as performing the same circuit just after a rain shower. Accommodation is "plastic change that re-establish[es] a scheme" (p. 91),
for example, performing the same task but this time without training wheels. Finally, equilibration is the last phase of an open, permanent, and endless process of learning. Here, the performer adapts to a variety of practice conditions "aimed at maximizing the stability of each scheme against violations of the transition and stability conditions resulting from environmental perturbations or internal tensions" (p. 91). Practical examples of this last phase are using lighter and heavier bicycles and riding around the same circuit but with steeper and flatter slopes, or on a variety of surfaces, such as dirt, asphalt, tile, etc.
Natural Learning Environments
Not all learning takes place in formal settings with purposeful teaching programs such as those implemented in schools and sports clubs. A great deal occurs naturally. Stewart (2010) emphasizes the importance of the learner's autonomy of action and of the learning that takes place in or near the learner's current stage of development, and criticizes the Shannonian notion of information and instructional teaching processes: "Learning" can only be a modification of the developmental process; this means that what can be "learned" is both enabled and constrained by the epigenetic landscape. Development, and therefore learning, is essentially an endogenously self-generating process; it is, therefore, unnecessary-and impossible-to "instruct" it from the outside. This runs directly counter to the widespread notion that "learning" is a process of "instruction," by which is meant a process of information transfer from teacher to pupil (pp. 8-9). Gallagher (2017) mentions the importance of natural pedagogy in a child's learning. The process of upbringing, non-formal teaching, and interaction with others (i.e., intersubjective education) determines the amount of attention we pay to some objects or events over others. A natural context in which enactive intelligence manifests itself is found among populations who depend on the sea for their livelihood. These groups, called Sea Nomads, include the Moken and Orang Laut of Malaysia and Indonesia and the Bajau Laut of the Philippines, Malaysia, and Brunei. These villagers spend an average of six or even 10 h a day in the water, and half of this time is spent underwater. The sea is the children's playground and the adults' workplace (Abrahamson and Schagatay, 2014). Their enactive aquatic intelligence is embodied and enacts with objects and situations, since it allows them to adapt to the problems posed by their aquatic environment. Their coupling is such that their children can see perfectly well underwater without the help of goggles, a finding that was of great interest to researchers (Gislén and Gislén, 2004). This lifestyle has led to major adaptations similar to those found in many marine animals, namely a diving reflex and clarity of vision underwater. Gislén et al. (2003, 2006) wondered whether the visual acuity of these groups was genetic or the result of an extensive history of coupling and co-dependence of these children with water. After carrying out different studies, they concluded that the superiority of these Moken or Bajau children with respect to European children was a consequence of a long evolutionary history of co-determinations with the marine environment (Gislén et al., 2003). For these researchers, spending their lives in the water has taught these children to constrict their pupils so that they can see clearly when they are submerged.
Thousands of years of structural coupling with water have facilitated the development of the underwater sensorimotor skills they need to survive. When these individuals dive into the water, their intelligence extends beyond their hands and is distributed in the fishing utensils they use, and they demonstrate a refined sensorimotor skill that allows them to move freely in the marine environment, making their aquatic experiences much more than mere acts of sensorimotor coordination.
The Subjective Universe of Learners and Experience
The subjective universe is independent of any external analyzing and quantifying observer. No external observer can see what the learner, practitioner or team sees, feels, and lives from their point of view, with their biological idiosyncrasies, sensorimotor skills, knowledge, psychological characteristics, and their experience during the activity. Agents themselves interpret the usefulness of their actions. It is their subjective world (Umwelt) that emerges in these situations (von Uexküll, 1951): a world of interactions and co-determination with different levels of analysis and organizational domains, in which the specific motivations and intentions of the agents arise during the action. Von Uexküll proposed the concept of Umwelt to highlight the specific relationships between agents and their environments. They always perceive the world from their point of view. As Merleau-Ponty (1985) explains, the individual is not only a body, not only a physical structure, but also a being that lives and experiences, that manifests an external and internal dimension when relating to its environment. From an enactivist perspective, learners are beings in a situation who have an intense relationship with their environment, in which their subjective world is intensely involved and is absorbed in their actions. Educational contexts are laden with embodied situations of acquisition and sensorimotor knowledge. Although the school system insists on eliminating the body from education, learners must be present with their body in the world. Learners are thus open to possibilities, operating bodily within, and reacting to, the specific situations in which they find themselves. They enact to obtain the information they need from the environment and make decisions without the need for complex cognitive operations or mental representations (Button et al., 2012; Avilés et al., 2014; Davids et al., 2015). The unit of analysis is no longer the isolated individual, but rather the system made up of that individual in situ, in co-dependent and dynamic interaction with the environment, in which emergent, self-organizing processes occur (Varela et al., 1991). Learners regard themselves as individuals in action and in a situation, in a dialectical relationship with their surroundings and with the objects around them. One of the crucial challenges and novelties of enactivism at the methodological level is to articulate descriptions of learners' experiences using objective behavioral data. Varela (1996) called this approach neurophenomenology: a way of combining the study of first-person subjective experience with third-person objectification. The key is to create a fruitful circularity between phenomenology and cognitive science. One of the characteristics of this method is that both participants and investigators need to be properly trained to use it correctly. A second person, i.e., an investigator or empathic resonator, is sometimes called in to act as mediator or coach.
Their role is to be aware of the signs and indicators of the study participant (i.e., the first person) in order to interpret the data they express, such as phrases, body language, expressions, etc. (Depraz et al., 2003). The capacity to access learners' consciousness is a fundamental challenge for investigators. The objective is to bring out the significant elements of the experience when practicing or learning a sports activity. There is an important body of literature in which researchers combine biomechanical data with data obtained from interviews with athletes and learners in PE and sports (Hauw and Durand, 2007; Bourbousson et al., 2012; Sève et al., 2013; Evin et al., 2014; Terré et al., 2016; Rochat et al., 2017, 2019; Hauw, 2018; Récopé et al., 2019). Although each study is unique, the general method has been to reconstruct the course of action (Theureau, 2010) to "capture" the performer's experience by verbalizing it in self-confrontation interviews. During the interviews, the researchers present the performers with videos and biomechanical data that allow them to relive the experience and more easily unleash the feelings, emotions, concerns, etc. In their study on rowing, Sève et al. (2013) analyzed athletes' subjective perception of the synchronization of their movements, which is not very noticeable externally. Reconstruction of, and access to, the athletes' course of action allowed coaches to identify temporary movement dysfunction. It is important to detect this mismatch in order to readjust the biomechanics of movements in training. It is essential to immerse ourselves in, or break down, the moment of learning from the point of view of the learner in PE because, as Masciotra et al. (2008) explain, learning occurs from the perspective or optic of the learner, and he or she will have significant access to new skills when this converges with the emotions, feelings, beliefs, previous experiences, etc. that are brought into play in the situated action. This has two very important consequences in PE. On the one hand, it optimizes the diagnostic assessment of the student and the initial knowledge of each student. On the other hand, it establishes a reactive, empathetic, skillful, and recurrent assessment that obtains information from the student's perspective and, simultaneously, gives empathic feedback that enhances the meaningfulness of the activity experienced by the learner in the context of the PE class. The world of teacher-student interaction during PE lessons appears to be mediated by intersubjective perspectives that show that teacher empathy is very relevant to motor learning.
The Role of Practitioners in Developing Skill Mastery
As mentioned above in the context of learning or mastery, enactivism relies on the tools of dynamic systems theory. Like ecological psychology, enactivism views the learner as a complex adaptive system that in turn forms a system with the environment. The self-organization of learner behavior is a crucial element in the autonomy of action. Several years ago, an applied proposal called the Constraints-Led Approach (CLA) emerged in the field of sports science and PE. It explains the emergence of a new pattern of coordination through the interaction between constraints associated with the task, the environment, and the learner. The CLA is based on the principle that the learning process does not follow a linear trajectory and therefore results in a non-linear pedagogy (Chow et al., 2016).
In this regard, for Davids et al. (2008) non-linear pedagogy is: A theoretical foundation that views learning systems as non-linear dynamical systems. It advocates that the observed properties of dynamical human movement systems form the basis of a principled pedagogical framework. In particular, non-linear pedagogy advocates the manipulations of key constraints on learners during practice (p. 224). If we accept that perception, coordination and cognition can be explained through the self-organization of behavior, we must believe that non-linear pedagogy can be related to enactivism. In an enactive pedagogy, the practitioner is always present to promote learning, but the question is, how? Although practitioners might adopt a traditional approach and teaching method, they must above all design learning environments that favor a varied landscape of affordances. In these scenarios, the practitioner's mission is to become a true "environment architect", insofar as sensorimotor exploration and autonomous discovery of solutions will be accompanied by more selective and less frequent use of verbal information. Therefore, teaching sessions involving children from 2 to 6 years of age should invite them to explore and allow their sensorimotor behaviors to emerge spontaneously (Équipe des Conseillers Pédagogiques en EPS du Bas-Rhin, 2015). Returning to the issue of autonomy of action, a study published by Récopé et al. (2019) found that certain professional volleyball players strayed from the established game system: "It should also contribute to explain why some people (here some players) have some difficulties to follow the prescription, including the role distribution in a collective organization (here the game system)" (p. 236). To make it easier for the reader to understand the acquisition process, we will take the example of tennis. There is no doubt that an apprentice aspiring to be a good tennis player will need thousands and thousands of practice shots to reach a good level of play. However, since tennis is not a predefined world, each shot or movement is different. In tennis, one of the most interesting moments of the game is the return of serve, probably due to the receiving player's impressive responses to a ball traveling at more than 200 km/h. One of the movements that tennis players acquire after years of practice is the split-step. This is a hop-jump sequence that the receiver performs by jumping or taking off from the ground just before the server hits the ball. Until recently, tennis players were thought to show directional anticipatory behavior, that is, they were able to anticipate the server's movements and move and jump to the side where the ball would land in order to respond before the server hit the ball. However, recent research has shown that expert tennis players follow a neutral jump pattern, i.e., they do not move to either side during the split-step (cf. Avilés et al., 2019). From the enactive perspective, studying the split-step gives insight into how tennis players with different levels of expertise function. Firstly, beginners and basic-level players cannot do the split-step; they must learn it naturally through sensorimotor adaptation during practice. There is no way of knowing exactly when they will learn it or when this movement will first emerge naturally during the game.
Secondly, non-anticipatory neutral behavior indicates that the expert receiver creates meaning or sense-making in each serve and return sequence, and even that considering their intentions, emotions, and movements, a participatory sense-making of the interactions between both players will emerge (Di . Thirdly, several scholars criticize excessive intellectualism and argue that mental representations are not needed to act competently (Noë, 2009). In fact, in the case of a tennis ace, these images or representations could impair their performance. The body-mind unit evolves, and this is reflected in the mastery the player brings to the sport. As Varela et al. (1991) put it: "As one practices, the connection between intention and act become closer [sic], until eventually the feeling of difference between them is almost entirely gone" (p. 29). For the embodied and enactive approaches, being skilled means acting intelligently in a situation in which the individual is both situating and situated (Masciotra et al., 2008), and in which he or she establishes a dynamic and adaptable relationship. Acting skillfully means using enactive intelligence, to the extent that it activates the individual's adaptive capacity as a learner, showing control over themselves and the situations around them (Noë, 2004). In this adaptive process, cognition is distributed throughout the body, the learner does not operate outside the world, rather, it is the interaction of the athlete with his or her world that gives meaning to learning and performance. These enactions take place in natural and formal educational contexts. Returning to enactive ideas, action spaces become a network of relationships, which are embodied between the agent (practitioner and/or learner) and the environment. Training, according to Stewart et al. (2010), becomes a conscious experience of the experience of acting, where the performers as "cognitive systems are always engaged in contexts of action that require fast selection of relevant information and constant sensorimotor exchange" (p. x). Learning skills means effectively and efficiently changing the knowledge of the acting agent, changing their sensorimotor patterns, their way of acting, the meanings of their actions, and their intentions. It involves creating an enacted and embodied itinerary in which the agents progress from incompetence to expertise. In enactivism, motor learning involves refining adaptation processes through a history of structural couplings with the environment. A situation is a specific space-time that influences the way individuals act. For enactivists, the practitioner needs to create situations that favor adaptation processes in which athletes co-determine with their environment, and which facilitate the emergence of the appropriate situation-specific motor patterns. As a result of various couplings, motor skills emerge more than they are acquired (van der Kamp et al., 2019). The Fosbury flop high jump does not exist in itself, it only exists when the athlete enacts with the situation and clears the bar. As Merleau-Ponty (1985) indicates, the individual is inseparable from the environment in which he acts. Sports action exists to the extent that athletes are in a position to act, and is defined, in this case, by their motor coordination and their sensitivity to changes in the physical and material environment in which they act (McGann et al., 2013). 
One of the questions posed by researchers is whether mental representations are needed in these intense relationships between athletes and their sports environment − whether it is appropriate to claim the existence of such internal constructs, or whether it is direct experience, the athletes' direct contact with their environments, that drives the emergence of the sensorimotor patterns of solution. Practitioners face the challenge of designing and promoting situations that favor enaction. Understanding the situations in which learners act in PE classes involves analyzing the interactions and couplings that favor the emergence of significant sensorimotor patterns to solve the problems that arise. It is important to examine these perception-action cycles and the dynamic interactions they elicit between the brain, the body, objects, materials, people, and the context in general. In the field of sports, the questions raised for researchers are: how do athletes interact with their environments? What favors or hinders these co-determinations or couplings with the environment? Or, how do sensorimotor patterns of action emerge in these co-dependence processes? Furthermore, and along the same lines, it is important for the practitioner to calibrate the degree of variability in daily practice. Renshaw and Chow (2018) maintain that practitioners must take variability into account to promote learning: The amount of variability designed-in to a session needs to be matched to the learner. For the beginner level player, low task and environmental variability may be beneficial to guide exploration toward one or two functional solutions. In contrast, the more expert performer may be presented with greater variability in individual, task and environmental constraints to promote more adaptable behavior. Knowledge of 'critical values' (i.e., the amount of variability that will lead to instability and the search for new solutions) is important for practitioners and needs careful management and awareness of the implications for placing individuals in these critical ('red') zones (p. 12). SKILL ACQUISITION FROM THE ECOLOGICAL PERSPECTIVE Ecological psychology was first formulated by the psychologist James J. Gibson (1979/1986) in opposition to the (by then prevailing) cognitivist approach. The ecological approach to visual perception was originally conceived to explain how animals control movement in their environment. Subsequent researchers have made significant contributions to the development of a theory founded on motor control and learning in humans (Michaels and Carello, 1981; Turvey, 1992; Withagen, 2004; Jacobs and Michaels, 2007) and its application in sport (van der Kamp et al., 2008; Fajen et al., 2009). In this section, we will try to describe the advances made in the ecological approach to sport expertise and skill acquisition. The ecological approach championed direct perception, which involves some core considerations (Gibson, 1979/1986). First, that information is rich enough to produce perception, and no computations or inferences (i.e., knowledge stored in the memory) are required to perceive the energy patterns in the ambient array. Second, that the available information specifies environmental properties that in turn offer invitations to act. Affordances can be conceived as the opportunities for action that the athlete perceives from informational variables emerging from the environmental specifications.
Third, perception and action are coupled processes that mutually influence each other. How an athlete becomes an expert, or how a learner is capable of acquiring new skills, has been explained by three interconnected stages: education of attention, calibration, and education of intention (Jacobs and Michaels, 2007). Education of Attention or Attunement When the perception of a property involves one-to-one mapping with respect to environmental energy patterns (1:1), then that informational variable is specific to that property. For instance, the variable tau (τ) under certain circumstances signifies the ratio expansion of an incoming object (e.g., a ball in a penalty kick), which in turn specifies the time-to-contact (Savelsbergh et al., 1991). However, performers detect (and use) some informational variables that may not perfectly correlate with environmental properties but can still be useful. These are the non-specifying variables (Withagen and Chemero, 2009). As one might observe, the usefulness of variables to accurately control movement depends on the reliability of the information (degree of specificity of the variable). For example, during a penalty kick, the goalkeeper may rely on non-specific variables, such as the penalty-taker's direction of gaze during the run-up (which does not necessarily match the direction of the kick; Wood and Wilson, 2010), or they might base their dive on other, more reliable, variables observed closer to ball contact, such as the orientation of the non-kicking foot (Lopes et al., 2014). In penalty-saving, informational variables that unequivocally determine the trajectory and the speed of the kick are extracted from the ball's flight. The education of attention is the convergence from the least to the most (1:1) specifying variables. With practice, performers learn to rely on more useful variables to control a particular action. A recent study applied this theoretical concept to penalty kicks, with promising results . Goalkeepers improved their rate of successful saves after on-field training. During training sessions, they were forced to pick up information closer to the point of contact with the ball by placing three potential penalty-takers who simultaneously started the runup to the ball, but only one kicked the ball. In other words, the goalkeeper learned to become attuned to more specific informational variables during the penalty kick. Calibration There is an ample body of research (cf. van Andel et al., 2017) showing that the perception of opportunities for action (affordances) are scaled to the performer's ability, such as their size (e.g., Warren, 1984) and personal capabilities in terms of action (e.g., Dicks et al., 2010b). Fajen's affordance-based control approach establishes that an athlete's action capabilities regulate their own and other's (opponents) affordance perception, creating a boundary between achievable and non-achievable actions (Fajen, 2005;Fajen et al., 2009). In other words, successful control of movement is predicated on the basis of the relationship between the maximum capabilities of action and the space-time constraints of the task. Continuing with the penalty kick example, one goalkeeper may be attuned to the most specific information about the future location of the ball: the trajectory of the ball during the first moments of flight. However, that information is typically picked up too late, leaving the goalkeeper insufficient time to complete the dive with guarantees (i.e., arrive at the right time). 
Therefore, if goalkeepers do not calibrate their agility to the expected demands of the situation by waiting for the most reliable information, they will move to the same side as the ball (the right place), but too late to intercept the ball (Navia et al., 2017). Hence, the space-time constraints of the task (speed of the ball, distance traveled) would need to be calibrated for maximum agility (speed of movement) if they are to achieve their objective of stopping the ball (see a detailed model of this in van der Kamp et al., 2018). Studies suggest that goalkeepers scale their timing of the save to their capabilities (Dicks et al., 2010b); more agile goalkeepers start the saving action later (closer to ball contact) and less agile ones dive earlier. Interestingly, more agile goalkeepers were found to save more penalties (Dicks et al., 2010b). Education of Intention Education of intention is defined as the selection of affordances that guide behaviors. In other words, it is about decisionmaking during an action. The selection of action is related to the perception of affordances, which in turn depends on scaling actions (calibration). For example, there are some basic scenarios in which the athlete's decision-making takes into account lateral movements (e.g., penalty kick, return of tennis serve). In the case of a penalty kick, the control of the action -where to dive and how to time the dive -would be primarily predicated on the affordance-based control of that interacting situation (see van der Kamp et al., 2018). In Dicks et al. (2010b), more agile goalkeepers who initiated the saving action later saved more penalties than their slower counterparts. The authors suggest that waiting longer allowed goalkeepers to pick up more reliable information and control their actions based on more specifying variables (Lopes et al., 2014). Similar findings have been reported in tennis (Triolet et al., 2013) when, under more lenient space-time constraints, players waited longer, which in turn allowed them to base their action on more reliable information (see also Navia et al., 2018). However, there are multiple situations where different affordances can be used to guide actions (e.g., imagine a football midfielder just after receiving the ball). Here, ecological dynamics provides a theoretical framework for explaining behavior trends (Araújo et al., 2018). On the premise that behavior emerges from the interaction between the athlete's characteristics (abilities) and the space-time constraints of the environment, affordance selection is understood as the shift from action modes (e.g., moving forward with the ball, dribbling, passing to a teammate, etc.). These changes between modes of action fulfill a functional criterion. Athletes follow a particular (and stable) action mode until the instability of emerging agent-environment constraints compels them to shift toward another mode during the action. Underlying agent-environment system factors such as the distance between encounters (Esteves et al., 2011;Vilar et al., 2012) have been found to shape changes of action modes and successful performance in sports such as basketball (Esteves et al., 2011), futsal (Vilar et al., 2012), boxing , etc. With practice, performers learn how to become attuned and calibrated to the landscape of affordances to maximize action selection and transition from one action mode to another (Araújo et al., 2019). 
In this regard, Rietveld and Kiverstein (2014) argue that an animal in a particular life form perceives affordances in relation to motor abilities. With practice and experience, as the performer becomes more skillful his or her landscape of affordances becomes richer and more varied. Therefore, two performers at different sporting levels who share the same sociocultural practice could have more relevant or less relevant affordances. This means that for an athlete, affordances are modified and actualized in accordance with the learning process. In the words of Heras-Escribano (2019a): "the action-perception loop changes and it allows us to open new possibilities that were not present before" (p. 87). Despite these recent contributions, this aspect is still the least developed area within direct learning (Jacobs and Michaels, 2007). In particular, how information from different sources interacts and is integrated remains unsolved. In this regard, the probabilistic functionalism derived from the Brunswik lens model may provide a promising sports framework to further explore the interaction between imperfect information coming from different time-scales: proximal vs. distal (Pinder et al., 2013). For instance, in the football penalty kick, goalkeepers in experiments modified their behavior (timing and side success) whenever situational information concerning the kicker's preferences was conveyed (Navia et al., 2013). Similarly, goalkeepers show an unusual tendency to dive to either side, regardless of the kicker's kinematics or record, due to possible negative social judgments if they do not (Bar-Eli et al., 2007). Representative Experimental Designs A core concern in ecological psychology has been the degree of fidelity between behavioral agent-environment system properties and the experimental settings used to test expert performance and perceptual (motor) learning. Since Araújo et al. (2007) reintroduced the original Brunswik notion of representative design (Brunswik, 1956), sports scientists have been concerned about the extent to which experimental conditions influence the perception of affordances and the regulation of actions. Accordingly, there has been growing opposition to some methods used to capture the expertise of athletes (e.g., verbal reports, occlusion techniques, video-based training, etc.) that could hamper perception-action coupling at both the basic, neuropsychological (van der Kamp et al., 2008;Mann et al., 2013) level and the applied behavioral level (Travassos et al., 2013). In other words, if ecological psychologists hold that performed actions change the way the athlete perceives the world, and affordance perception changes the regulation of actions, then separating or altering the natural coupling between perception and action would significantly distort the picture of perceptual attunement, calibration and affordance selection. For example, in the football penalty kick, findings suggest that differences in information pick up among goalkeepers occur as a function of the type of response required (i.e., joystick or verbal vs. actual save) (Dicks et al., 2010a). Therefore, the representativeness of task design in experimental settings should be assessed and ideally be preserved at the highest level to truly recreate the athlete's skill performance in a competitive context (Avilés et al., 2019) or actual learning conditions (Pinder et al., 2011). 
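Before turning to the differences between the two approaches, the idea of a specifying informational variable discussed in the Education of Attention subsection can be made concrete. The expression below is the standard first-order formulation of tau usually attributed to Lee; it is added here only as an illustration, and the notation (θ for the optical angle subtended by the approaching ball) is ours rather than taken from the studies cited above.

```latex
% Sketch of the optical variable tau and first-order time-to-contact (TTC).
% theta(t): optical angle subtended by the approaching object (e.g., the ball).
\[
  \tau(t) \;=\; \frac{\theta(t)}{\dot{\theta}(t)},
  \qquad
  \mathrm{TTC}(t) \;\approx\; \tau(t)
  \quad \text{(exact for a constant closing velocity).}
\]
```

Read this way, education of attention is the process by which the performer converges on variables whose mapping to the property of interest (here, time-to-contact) approaches the 1:1 specificity described above.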
DIFFERENCES AND SIMILARITIES BETWEEN APPROACHES Discussing skill acquisition from an enactive and ecological perspective, this paper will now present some divergent points that make it difficult for these approaches to converge or work together. Heft (2020) recently analyzed the most relevant books and articles in this regard and identifies three main discrepancies between direct realism and enactivism: sensations, the concept of information, and an organism's boundaries (see Heft, 2020 for a review). Cummins (2020), meanwhile, also believes convergence to be difficult, and criticizes ecological psychology, saying: "on the experience side of the account, it has nothing to say about phenomenology, experience, emotions, or feelings" (p. 9). Moreover, Cummins says: The ecological analysis starts by singling out a "behavior" to be characterized, by fixing the organism/animal/agent and the environment of relevance, and it builds its account from there. In so doing, it frequently has the result that much of the explanatory load normally consigned to hidden interiorities and brains is reduced, but not removed. (p. 10) Since the publication of The embodied mind in 1991, the founders of enactivism have criticized Gibson's approach. Almost 30 years ago, Varela et al. (1991) explicitly expressed their disagreement, especially with "the act of perceiving being direct" (p. 204). For ecological psychology, information is there for the agent to perceive it directly, and this information affects the perceptual process that guides their actions. For enactivists, these invitations to act (affordances) cannot be captured directly by the agent − they can only be detected or rather enacted in a co-determination relationship (Scarinzi, 2011). The concept of functional tonality has many similarities with the previously mentioned equipmentality of Heidegger (2003) or Gibson's (1979/1986) affordance, something that is perceived and interpreted by performers and practitioners. Coaches, in their desire to teach and correct skills, situate their subjective universe in relation to the subjective worlds of their athletes. This embodied orientation is reflected in the current understanding of motor learning in school, where the feelings, emotions, and perceptions of how learners live and feel teaching have led to the emergence of new 21st century PE teaching methods, focusing on how to teach learners to understand how they learn (Moy et al., 2019) in an environment full of meaning, mediated by a universe of dozens of students. Another relevant critique of the ecological approach has been the role played by the movement of the perceiver. Enactivists argue that in the Gibsonian approach, agents and performers play a passive role when perceiving, that is, that the act of perceiving was passive rather than active (Varela et al., 1991). We believe that Gibson's (1979/1986) founding idea has always been that of seeing the agent as an active explorer, and we, therefore, disagree with this enactivist critique. In this regard, Gibson (1969) mentioned the following about perceptive learning: "It is not a passive absorption, but an active process, in the sense of exploring and searching, for perception itself is active" (p. 4). Three decades later, Gibson and Pick (2000) added: "Information about properties and especially about what they afford is actively obtained by exploring, and after a few more months by actively using objects.
Exploring objects and discovering how they can be used is the way meanings are learned" (p. 86). It is also true that the experimental paradigms used by both approaches can induce certain differences. Specifically, many enactivist studies have used sensory substitution or deprivation, which compels participants to perform many active movements to perceive, and this, logically, leads to a sense-making that demands a true commitment from the learner (see Bermejo et al., 2020). However, if we consider ecological sports studies, where the performer has access to information from all their sensory modalities, perception can be rapid and active without the need for many repeated or constant movements to elicit meaning. For example, in a football penalty kick, the goalkeeper moves but only has a few milliseconds to perceive the direction of the ball. An important limitation for the convergence of both post-cognitive approaches is the use of different concepts and vocabulary. This is a problem for researchers who are compelled to use enactive and neo-Gibsonian terms that generate different interpretations. A common language would facilitate understanding among practitioners such as PE teachers and coaches and help them design practice sessions that encourage emergence or self-organization. Heras-Escribano (2019b) argues that some enactivists use the term affordances very lightly, with no regard for the ontological and epistemic consequences (p. 207). On the other hand, it is interesting to note how the term affordances has different interpretations in ecological psychology (e.g., Chemero, 2009;Withagen et al., 2012;Rietveld and Kiverstein, 2014). Despite apparently studying the same phenomenon, when we look more closely, we realize that both approaches explain the relationship between the performer and the environment differently. From an ecological point of view, the environment is more objective; enactivists, however, give more importance to the history and lived experience of the agent. As McGann (2016) points out: "it is also important to note that experience is continually sensitive to its own history" (p. 313). Enactivism explores this subjective dimension of actions, motivations, needs, and impulses that compel performers to commit to their environment (Di Paolo, 2005). Each person perceives this enactive relationship according to their personal characteristics, their goals, and their life experience. And it is in this area of this subjectivity where objects and situations make sense to the performers, where they acquire a purpose and utility, and where they have a functional tonality, defined by what can be done with them, by their usefulness. This is an aspect that ecological approaches based on Gibson's theories reject, despite the numerous coincidences that exist between the two paradigms, since although ecological approaches sometimes use the term enaction, they do so as a way of highlighting the active nature of perception, focusing attention on the environment (Stoffregen et al., 2006). For enactivists, the structural coupling between agents and their environment (couplings) must be regarded as sensorimotor patterns that enable them to carry out actions guided by their perceptions (Scarinzi, 2011). This is why analyzing sports performance and experience from a solely third-person perspective only allows us to capture this action dynamic in an equation of movement and modeled behavior but does not allow us to understand how the agent experiences this environment as the one who acts. 
Hence, the enaction paradigm offers the possibility of articulating the different levels and domains of organization involved in sports action. As Krein and Ilundáin-Agurruza (2017) show, sport highlights the value of enactivism and extend its range of application beyond the simple minds that are usually analyzed by researchers. Within a broader, conceptual, embodied cognition, Beilock (2008) argues that experience (i.e., practice) in/of a particular action modifies the extent to which cognition is grounded in action. Thus, embodied cognition establishes that our experience in performing a particular action helps us to predict the other's actions in terms of what and how they will act (a similar concept to social affordances; Fajen et al., 2009). Thus, motor experience accumulated by a skilled athlete would maximize their ability to predict and accurately assess the other's actions (Cañal-Bruland et al., 2010). However, the most prolific approach to sport is the embodied perception theory (Proffitt, 2006;Witt, 2017). This embodied viewpoint postulates that perception of the environment (e.g., ball speed) is not (solely) determined by the physical properties of the environment, but rather reflects our ability to interact with objects (Proffitt, 2006). Although this assertion may seem unaligned with the foundations of ecological psychology, embodied perception can be conceived as an extension of the concept of affordance (Gray, 2014). Hence, the properties of the environment are perceived in terms of the agent's action capabilities (Witt and Riley, 2014;Witt et al., 2016), and within a rational boundary of possible actions to be carried out (Lessard et al., 2009), an idea which has similarities with the aforementioned affordance-based control approach (Fajen et al., 2009). There is extensive empirical evidence of how specific perception affects the function, the relative difficulty of a task (i.e., skill expertise, concomitant success, objective task difficulty), and its final goal (overviews on Gray, 2014;Witt et al., 2016). For example, the cup was perceived to be larger by golfers who were more skilled, and when putting from nearby (Witt et al., 2008). Similar effects have been found in softball and baseball, where players with a better performance history perceived the ball to be larger (Witt and Proffitt, 2005;Gray, 2013), or when the stroke was more difficult to execute (Gray, 2013). In the same baseball study, the speed of the incoming ball was perceived as being slower by players with a better batting average (Gray, 2013). This action-specific perception of speed is claimed to be supported by an underlying perceptual-motor information process similar to calibration. By way of example, if the absolute value of ball speed is scaled to the individual agility of goalkeepers, then goalkeepers with faster lateral movement would perceive the ball as slower (Gray, 2014). With respect to the objective of the task, findings suggest that different perceptions of ball size are a function of the intended objective of the action. For example, Batters perceived a ball to be larger when the task constraints were aligned with the goal action and vice versa (Gray, 2013). Other authors have observed a correlation between hitting performance and estimated target size, but the effect disappeared when the goal of the action changed from just hitting to hitting and catching the launched ball (Cañal-Bruland and van der Kamp, 2009). 
Gray (2014) suggests that these differential effects of ball size perception, as a function of the goal of the intended action, may be used to shape affordance selection (e.g., altering the task constraints of the batter's training, such as ball size or trajectory, to perfect a specific stroke). In recent years, researchers have made theoretical attempts to bridge the gap between enactivism and ecological psychology (Chemero, 2009; Stapleton, 2016; Baggs and Chemero, 2018). The rapprochement between these theoretical perspectives is reflected in the transversal use of some concepts, such as affordances and agency. For example, ecological psychologists and enactivists use the term affordances to refer to the values or meanings of things (Thompson, 2007). Gibson himself said: "I have coined this word as a substitute for values, a term which carries an old burden of philosophical meaning" (Gibson, 1966/1968). Even in the enactivist framework, the original concept has been changed to a broader notion called affordance spaces (Gallagher, 2017, p. 174). According to Travieso et al. (2020), if we are to bring enactivism closer to ecological psychology it is essential to distinguish between perceiving and actualizing affordances. These authors also comment on the relationship between affordances and sensemaking, outlining that this is "because affordances are related to the bringing-forth-the-world concept of enactivism and sensemaking" (p. 7). Higueras-Herbada et al. (2019) claim that direct learning theory should be included among the post-cognitivist theories of learning, as it shares the basic commitments of embodied, embedded, enacted, and extended. According to Heras-Escribano (2019a), enactivism and ecological psychology can be combined in a single post-cognitivist research framework, providing we assume the interaction between the organic agent and the environment on two different levels of understanding. The subpersonal level involves the neural dynamics of the sensorimotor contingencies and the emergence of enactive agency, and the personal level deals with the dynamics that emerge from the organism-environment interaction in ecological terms. It seems, then, that sensorimotor abilities and the study of affordances have much more in common than their proponents realize (Chemero, 2009, p. 154). In a more sports-related framework, the enactivism of Krein and Ilundáin-Agurruza maintains that high cognitive non-representational states during a high performance (e.g., climbing without ropes) can be possible through flow and mushin (i.e., mindfulness fluid awareness). The athlete is holistically attuned to the environment on multiple levels of engagement: intellectual, emotional, volitional, kinetic, and other capabilities (Krein and Ilundáin-Agurruza, 2017). Climbing is a very comprehensive sport that develops different skills in PE classes. It is interesting because the learner creates an intense relationship through a personal commitment to the wall (Terré et al., 2016). None of the learners will live the same experience. Each student must discover creative solutions that emerge moment by moment, in their constant interaction with the wall. The best hand and foot holds are not prepared in advance, but will rather be the result of their sensorimotor enactment. CONCLUSION In this article, we have shown the enactive and ecological notions used to explain learning in sport, PE, and daily living activities.
Sports and motor skills in general are excellent settings for investigating learner cognition. Reaching a certain level of learning or mastery requires practice, and each learner experiences the learning process in a way that is unique and individual to them. We have seen how for several years, and despite having several ideas in common, enactivists and ecological psychologists have seemed to be working separately, and sometimes these ideas are at odds. However, there are clear signs of the potentials for combining the two approaches. In 2020, the beginning of a new decade, we are curious to see how events will unfold. Enactivists and neo-Gibsonians may one day no longer regard each other with suspicion, and instead join forces in a joint language, forming an enactive-ecological program or an ecological-enactive approach (cf. Baggs and Chemero, 2018;Heras-Escribano, 2019a). This would certainly allow them to broaden explanations and in our case, to better understand, how human beings function when they learn or perfect a skill. In line with the conclusions of Segundo-Ortin (2020), we believe that enactivism (Di Paolo et al., 2017;Di Paolo, 2019) can make an important contribution to understanding learning and showing how the performer acquires and optimizes his or her sensorimotor skills. AUTHOR CONTRIBUTIONS CA, JN, L-MR-P, and JZ-A have equally contributed to the structure, theoretical position, and writing of the manuscript. All authors contributed to manuscript revision, and read and approved the final version.
Trade Unions and Labour Welfare Measures – A Study on PSUs in Kerala Labour welfare is the setting up of minimum desirable standards that enable the worker and his family to enjoy a good working life, family life and social status. Trade unions play an integral part in safeguarding the interests of labourers, irrespective of whether the organization is big or small, public sector or private sector. The present study is an attempt to identify the labour welfare measures of the Public Sector Undertakings in Kerala and the role of trade unions in labour welfare measures. The study reveals that trade union activities were limited to conducting strikes for wage increases; the unions were not involved in labour welfare activities for the labourers. The study suggests that trade unions should also participate in the labour welfare activities of the organization, which is the most important factor for gaining the confidence of the workers. In their article titled "Railway employees perception towards working condition and role performed by trade unions: A study on Badarpur Sub-division of N.F. Railway", the authors attempt to measure the satisfaction of the railway employees with the role performed by the trade unions and with the other activities of the trade union related to the upgradation of the railway employees in the Badarpur sub-division of N.F. Railway. The study revealed that the trade unions were not only playing an active role in improving the quality of work life of employees, but also in maintaining good industrial relations in the organisation. In fact, the welfare of the employees seems to be an inseparable component of the functions of these trade unions. Donado, A., & Wolde, K. (2012), in their article titled "How trade unions increase welfare", attempt to study the role of trade unions in increasing the output and welfare of an economy. Historically, worker movements have played a crucial role in making workplaces safer. Firms traditionally oppose better health standards. According to their interpretation, workplace safety is costly for firms but increases the average health of the workers and thereby the aggregate labour supply. Worker movements and trade unions provide a means to exchange information among workers about the health implications of hazardous jobs. By pooling this information and setting labour standards accordingly, trade unions increase the output and welfare of an economy. Sarma, A. M. (2011) identified the primary function of a trade union as protecting the interests of its members by engaging in welfare activities like organizing mutual benefit societies, co-operatives, unemployment assistance, libraries, games and cultural programmes. Education of its members in all aspects of their working life, including the improvement of their civic environment, is another function. White, M. (2005) studied the impact of unions on management, practices adopted to reduce labour costs, the need for implementing high performance work systems, and employee welfare provisions. The study reveals that, compared to organisations without trade unions, organisations with trade unions were found to have practices which were consistent with "mutual gains" outcomes. Gosh, P., Nandan, S., & Gupta, A. (2009) focus their study on plant-level trade unions of PSUs in India, aiming to capture the changing roles of trade unions from maintaining good industrial relations to improving the quality of life of workers.
The study suggests that if a union is actively involved in the labour welfare of the workers, then the workers may be motivated to remain attached to it, rather than joining another union. OBJECTIVES: 1. To identify the statutory and non-statutory labour welfare measures of the organization 2. To study the role of trade unions in labour welfare measures SCOPE: The study covers all the labourers (9879 labourers) in the manufacturing sector (35 units) of PSUs in Trivandrum. METHODOLOGY: The method used for the study was both analytical and descriptive in nature. Both primary and secondary data have been used for the study. Primary data were collected from 60 labourers of Public Sector Undertakings. Applying the stratified random sampling method, the 35 units were classified into 3 strata on the basis of the number of labourers working in each PSU, namely small scale (below 100 labourers), medium scale (between 100 and 500 labourers) and large scale (above 500 labourers). From each stratum, 20 labourers were selected using the convenience sampling method. Secondary data were collected from various sources such as theses, journals and reports. ANALYSIS AND INTERPRETATION: The labour welfare measures provided by the PSUs in Kerala can be classified into two categories: statutory and non-statutory labour welfare measures. Statutory welfare measures consist of those which are provided under the different labour legislations; all employers in India are statutorily required to provide these measures. Non-statutory welfare measures include those which are provided by the employers and workers' organizations apart from the statutory welfare measures. The analysis of the statutory labour welfare measures provided to the labourers of the organization reveals that the majority of the labourers in the PSUs were provided statutory welfare measures such as uniform storing facilities (70%), sitting facilities (90%), drinking water facilities (100%), first aid appliances (75%), shelters, lunch room and rest room facilities (65%) and canteen facilities (60%). But the majority of the labourers were not provided with welfare officer services (65%). None of the labourers were provided with uniform washing facilities or crèche facilities in the organization. It is also evident from the analysis that the majority of the labourers were provided with a number of non-statutory welfare measures such as educational facilities (48%), medical facilities (42%), housing schemes (45%), proper lighting (39%), latrines and urinals (60%), washing allowances (39%) and uniform allowances (36%). However, transportation (6%), recreational (24%) and counselling (21%) services were not provided to the majority of them. Trade unions are voluntary organizations of workers formed mainly to promote, protect and improve, through collective action, the social, economic and political interests of their members. Seeking a healthy and safe working environment is also a prominent feature of union activity. The analysis (on a 5-point scale) of the role of trade unions in labour welfare measures reveals that 75% (the majority) of the respondents were of the opinion that trade unions help to secure better wages for the labourers, though a minority (25%) disagreed. 54% (the majority) of the respondents opined that trade unions were not participating in the labour welfare activities of the organization, but a small percentage (10%) shared a different view.
That is, they opined that trade unions were participating in the welfare activities of the organization. 90% of the respondents opined that trade unions were not running any welfare institutions, while 80% of the respondents were of the opinion that trade unions were helping to redress their grievances. A majority of the respondents (60%) felt that trade unions were not sincere towards the labourers of the organization. 57 respondents (95%) found that trade unions were supporting the production programmes of the organization, and 60% of the respondents felt that the trade unions were conducting labour strikes to protect the interests of the labourers. FINDINGS AND SUGGESTIONS: The labourers in the PSUs were provided statutory welfare measures such as uniform storing facilities, sitting facilities, drinking water facilities, first aid appliances, shelters, lunch room and rest room facilities and canteen facilities. But the majority of the labourers were not provided with welfare officer services, and none of the labourers were provided with uniform washing facilities or crèche facilities in the organization. In the case of non-statutory welfare measures, the labourers were provided with a number of facilities such as educational facilities, medical facilities, housing schemes, proper lighting, latrines and urinals, washing allowances and uniform allowances, but transportation facilities, recreational facilities and counselling services were not provided to them. An analysis of the role of trade unions in labour welfare measures reveals that trade unions were neither participating in the labour welfare activities of the organization nor running any welfare institutions. However, trade unions were helping to redress workers' grievances. A majority of the respondents felt that trade unions were not sincere towards the labourers of the organization, although trade unions were supporting the production programmes of the organization. Respondents felt that the trade unions were conducting labour strikes to protect the interests of the labourers. PSUs in Kerala failed to provide all the statutory welfare measures as per the Factories Act 1948. The Government of Kerala should take the initiative to provide uniform washing facilities and crèche facilities to the labourers of the organization. Trade unions were not participating actively in the labour welfare activities of the organization; they were only making efforts to secure better wages for the workers. Trade unions should also participate in the labour welfare activities of the organization and should stand with the labourers to protect their interests. CONCLUSION: Labour welfare measures help the worker and his family to lead a good working life, family life and social status. In safeguarding the interests of the labourers, trade unions should play an important role, as they are one of the agencies that provide labour welfare measures to the workers of the organization. The present study is an attempt to identify the labour welfare measures of the Public Sector Undertakings in Kerala and the role of trade unions in labour welfare measures. The study reveals that trade union activities were limited to conducting strikes for wage increases; they were not involved in labour welfare activities for the labourers. The study suggests that trade unions should also participate in the labour welfare activities of the organization, which is the most important factor for gaining the confidence of the workers.
SCOPE FOR FUTURE STUDY: A comparative study of trade unions in central and state PSUs would help to identify the areas where trade unions can extend their activity towards labour welfare measures. In Kerala, it was observed that unions were more involved in politics than in concentrating on the welfare of the workers. The leaders and office bearers were more focused on fulfilling their personal interests without paying any attention to the hard-working labourers. A similar study of PSUs across India could help to gain critical insights into these issues.
How Stock of Origin Affects Performance of Individuals across a Meta-Ecosystem: An Example from Sockeye Salmon Connectivity among diverse habitats can buffer populations from adverse environmental conditions, influence the functioning of meta-ecosystems, and ultimately affect the reliability of ecosystem services. This stabilizing effect on populations is proposed to derive from complementarity in growth and survival conditions experienced by individuals in the different habitats that comprise meta-ecosystems. Here we use the fine scale differentiation of salmon populations between diverse lake habitats to assess how rearing habitat and stock of origin affect the body condition of juvenile sockeye salmon. We use genetic markers (single nucleotide polymorphisms) to assign individuals of unknown origin to stock group and in turn characterize ecologically relevant attributes across habitats and stocks. Our analyses show that the body condition of juvenile salmon is related to the productivity of alternative habitats across the watershed, irrespective of their stock of origin. Emigrants and residents with genetic origins in the high productivity lake were also differentiated by their body condition, poor and high respectively. These emigrants represented a substantial proportion of juvenile sockeye salmon rearing in the lower productivity lake habitat. Despite emigrants originating from the more productive lake, they did not differ in body condition from the individuals spawned in the lower productivity, recipient habitat. Genetic tools allowed us to assess the performance of different stocks groups across the diverse habitats comprising their meta-ecosystem. The ability to characterize the ecological consequences of meta-ecosystem connectivity can help develop strategies to protect and restore ecosystems and the services they provide to humans. Introduction There is increasing appreciation for how habitat complexity (including variation in geomorphic, chemical, and thermal properties) can buffer ecosystem function and the reliability of ecosystem services by promoting species and population diversity [1]. Ecosystems filter external climate forces differently such that they may offer higher or lower quality habitat depending on prevailing climate conditions. Over time, habitat conditions may vary inversely with one another producing a temporally variable mosaic of habitat quality on the landscape [2]. The mosaic of habitats on the landscape is not necessarily composed of discrete ecosystems but instead represents a network of heterogeneous habitats that can be conceptualized as a meta-ecosystem with movement of organisms, materials, and energy among component systems [ sensu 3]. Biological elements of the ecosystem respond to this heterogeneity, producing spatially variable species or population dynamics [4,5] and life history diversity [6]. Asynchronous productivity results in more stable aggregated dynamics than that of any individual species or population over time [7] and so too are the derived ecosystem properties [e.g. ecosystem productivity, 8] and services [e.g. fisheries , 9]. The availability of variation in habitat conditions not only facilitates the persistence of distinct populations but can also buffer a single population from environmental variability [10,11]. For example, butterflies in Britain show more stable population dynamics in landscapes with a broader suite of habitat types and topographic heterogeneity [12]. 
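The stabilizing effect of asynchronous dynamics described above (often called a portfolio effect) can be illustrated with a minimal numerical sketch. The figures below are invented purely for illustration and are not taken from the cited studies.

```python
import numpy as np

rng = np.random.default_rng(1)
n_years, n_pops = 1000, 5

# Hypothetical productivity series for five populations with equal mean and variance.
series = rng.normal(loc=100, scale=30, size=(n_years, n_pops))

def cv(x):
    """Coefficient of variation of a time series."""
    return x.std() / x.mean()

# Asynchronous (independent) populations: variability of the aggregate is damped.
asynchronous_total = series.sum(axis=1)

# Perfectly synchronous populations: one shared series scaled up five-fold.
synchronous_total = series[:, 0] * n_pops

print(f"mean CV of single populations: {np.mean([cv(series[:, i]) for i in range(n_pops)]):.2f}")
print(f"CV of asynchronous aggregate : {cv(asynchronous_total):.2f}")
print(f"CV of synchronous aggregate  : {cv(synchronous_total):.2f}")
```

With independent dynamics the aggregate coefficient of variation falls roughly by the square root of the number of populations, whereas perfect synchrony provides no buffering; this is the sense in which habitat heterogeneity can stabilize aggregate dynamics and the services that depend on them.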
In order for populations to benefit from habitat heterogeneity, these habitats must be connected such that individuals are able to move among them [13]. Population dependence on different habitats is often associated with migratory species that make feeding, breeding, or overwintering migrations over large distances [described by 14]. Alternatively, connectivity among habitat types at small spatial and temporal scales allows individuals to move in order to negotiate short-term tradeoffs between food quantity and quality, density, optimal environmental conditions, and exposure to predation [15,16]. Life history diversity within a population can lead to the phenomena of partial migration [reviewed by 17] such that not all individuals move among alternative habitats. The relative proportion of migrants within a population over time may be reflective of the variation in relative habitat quality with higher migration rates associated with greater differences in quality [18] or environmental thresholds [19]. Anadromous Pacific salmon (Oncorhynchus spp.) are well known for their large scale migrations between freshwater spawning and rearing habitats and marine feeding habitat. Connectivity between the ocean and freshwater habitat, sometimes thousands of kilometers inland, is necessary for these species to complete their lifecycle. Anthropogenic activities, including dams, irrigation, urbanization, and logging, have threatened connectivity among these ecosystems in many regions [20]. However, during the freshwater rearing stage, connectivity at finer scales is also important for juvenile salmon to negotiate growth and survival trade-offs. In this context, salmon capitalize on heterogeneous habitat within a single lake or river system through population or individual movement. Population movements may indicate a seasonal change in productivity among habitats such as offshore movement by juvenile sockeye salmon [O. nerka, 21] or a balance between feeding opportunity, thermal conditions, and predator avoidance [e.g. diel vertical migration, 22,23]. Moreover, alternative movement strategies in salmon populations are common, with some individuals in the population occupying a single habitat during a life-stage while other individuals move among alternative habitats in response to habitat quality [24]. Habitats may offer tradeoffs between high resource quality and profitable abiotic conditions (e.g., temperature). Coho salmon (O. kisutch), for example, that forage in cold, food rich habitats but move to warm, food poor habitats to process food grow faster than individuals that do not move among habitats [25]. During their freshwater life history phase, juvenile salmon can not only exploit heterogeneous habitat within a single lake or stream but can move throughout watersheds. Some coho salmon exhibit an alternative strategy in which individuals migrate downstream into estuaries in their first year of life and then return upstream to overwinter in freshwater [26]. Similarly, juvenile steelhead (O. mykiss) have been shown to exploit estuarine connectivity during freshwater rearing without continued migration to the ocean within the same year [27]. Sockeye salmon also exhibit inter-lake migrations from high to low density lakes [28] or among lakes with very different abiotic conditions [29]. The attributes of movers in salmonid populations and the ultimate consequences for those individuals and their populations are context dependent. 
A variety of factors may influence an individual's propensity to migrate including competition, food availability, and population density [17] which may be reflected in their physical characteristics such as size or body condition. In some systems it appears that while movers and residents do not exhibit initial differences in physical condition, movers have higher growth rates upon moving to alternative habitats [24]. In other systems, individuals that become emigrants may be of lower or higher condition than residents depending on the environmental conditions in a given year [29]. Assessing the success of movers versus residents poses a challenge when individuals of one population immigrate into new habitats that are already occupied by a different population of the same species. This is likely to happen when migrants exploit habitat connectivity at the watershed scale. Furthermore, movers could have direct or indirect effects on the resident individuals in their new habitat. Evaluating the success of alternative movement strategies as well as the interactive effects among populations requires the identification of individuals to their population of origin. Population structure is often cryptic and only detectable with genetics or intensive tagging studies [30]. Genetic tools can provide a useful and less time-consuming alternative to tagging studies, particularly for large systems with high organism densities where recapture rates are low. Genetic techniques are particularly well-developed for Pacific salmon [31,32] due to substantial interest in population-level management at both the state and federal level. Furthermore, because of strong natal homing by spawning adults [33], salmon populations are highly differentiated at relatively fine spatial scales [34,35,36]. Specifically, single nucleotide polymorphisms (SNPs) have become a common and robust tool to allocate Pacific salmon of unknown origin to known spawning populations [37,38]. The Chignik watershed on the Alaska Peninsula provides the opportunity to investigate individual performance among alternative rearing strategies in a sockeye salmon metaecosystem. Freshwater life histories of sockeye salmon have historically differed between natal lakes in this watershed. Juveniles from Black Lake (upper watershed) spend one year in freshwater and individuals from Chignik Lake (lower watershed) spend two years in freshwater, reflecting the thermal conditions and relative productivity between the two lakes [39]. Downstream emigrations by a proportion of the Black Lake juvenile sockeye salmon population to Chignik Lake appear to be common in this meta-ecosystem, however [29,39,40]. Midsummer juvenile emigrations are only in the downstream direction, and Black Lake juvenile sockeye salmon emigrants spend the remaining portion of their freshwater residence in non-natal habitat [29]. In recent decades, median emigration dates ranged from mid-June to mid-July with the majority of the emigration concluded by the end of July [29]. Furthermore, downstream emigrants (captured downstream of the lake outlet) have a lower body condition than fish that remain in Black Lake throughout the summer [29]. Once these emigrants enter Chignik Lake, however, their performance in non-natal habitat is unknown. 
Furthermore, because fish sampled in Chignik Lake cannot be visually identified to stock, characterizing the body condition and growth of Chignik Lake stocks has been historically limited to scale pattern analysis which made assumptions about how growth differed between stocks. Recently, SNPs have been used in the Chignik watershed to assess stock specific characteristics in a common rearing environment during a single summer [2008,41]. Simmons et al. [41] found that a substantial fraction (33%) of the juvenile sockeye salmon rearing in Chignik Lake in mid-July were of Black Lake origin, which increased to 46% at the end of August. Simmons et al. [41] were able to compare the performance of individuals among habitats for a subset of the individuals sampled, but 45 SNP markers were only able to robustly assign 40% of the individuals captured. Here we build upon the work of previous studies and use the fine scale differentiation of salmon populations among diverse lake habitats on the Alaska Peninsula to assess how rearing habitat and stock of origin affect the body condition of juvenile sockeye salmon. We were able to robustly assign individuals of unknown origin to stock groups using a greater number of SNPs than previously available and in turn characterize ecologically relevant attributes across habitats and stocks. We addressed the following questions. 1) How variable is the stock composition of juvenile sockeye salmon in a common rearing environment (Chignik Lake) among years? 2) Does habitat quality differ among lakes as expressed by juvenile sockeye salmon body condition? 3) Is emigration from warm (Black Lake) to cold (Chignik Lake) summer habitat linked to body condition? 4) In a shared rearing environmental, what are the relative body conditions of natal (Chignik Lake) versus non-natal (Black Lake) individuals? Ethics Statement Sample collections and methods were permitted under Alaska Department of Fish and Game (ADFG) permits SF2010-094 and SF2011-121. All protocols complied with the University of Washington IACUC permit 3142-01. Study Site In the Chignik watershed, Alaska Peninsula, USA (Figure 1), sockeye salmon (O. nerka) are the numerically dominant anadromous species and support a valuable commercial fishery (average annual harvest 1.7 million since 1977, data from ADFG) and a local subsistence harvest. Sockeye salmon spawn in tributaries to both Black and Chignik lakes, rear in freshwater for 1-2 years, migrate to the ocean for 3 years on average, and then return to natal streams and lake beaches to spawn. The number of spawners (escapement) is tightly controlled by ADFG and was relatively constant during our study years. The escapement for the Black Lake stock was 391,474 in 2009 and 432,535 in 2010. In Chignik Lake, juvenile sockeye captured may be age-0 or age-1. Escapements producing the juvenile sockeye we sampled were 328,479 (2008), 328,586 (2009), and 310,634 (2010). The attributes of rearing habitat for juvenile sockeye salmon in the Chignik watershed are diverse. Shallow Black Lake (4 m max. depth) is a warm, turbid, and productive lake in the upper watershed. Black Lake is also experiencing geomorphic evolution on ecological time scales and has lost ,40% of its volume since 1960 [42]. In contrast, deep and cold Chignik Lake (60 m max. depth) downstream has maintained a stable volume over recent decades. Differences in sensitivity to air temperature reflect the geomorphic differences between the lakes. 
In our sample years, mean daily July and August surface water temperatures in Chignik Lake were 10.8°C (2010) and 10.7°C (2011), while in Black Lake they were 13.1°C (2010) and 12.6°C (2011). Furthermore, air temperatures have increased 1.4°C on average in the watershed between 1960 and 2005 [43].
Sample Collection 2010-2011
Juvenile sockeye salmon in Chignik and Black lakes were sampled at the end of August using townets. In 2010, sample dates were August 25th and August 28th in Chignik Lake and Black Lake, respectively. In 2011, samples were collected on August 24th in Chignik Lake and August 25th in Black Lake. Five sites on Chignik Lake were sampled using a 2 m × 2 m net, which was pulled at the lake surface between two boats for a duration of 10 minutes per set. The same protocol was used to sample five Black Lake sites, but a 1.2 m × 1.2 m net was deployed. If samples were large, a known fraction of the catch was retained. Fish were euthanized in a buffered MS222 solution and were returned to the lab for processing. Sockeye salmon were measured to the nearest mm (fork length) and weighed to the nearest 0.1 g. Genetic samples were collected from Chignik Lake by removing the entire caudal fin. Sample tissues were pressed to gridded filter paper and air dried for later DNA extraction. The association between each fish's length, weight, and genetics sample was retained. Data deposited in the Dryad repository: http://dx.doi.org/10.5061/dryad.jn14d.
Laboratory Analysis
A subset of individuals captured in Chignik Lake was genotyped in 2010, and all captured individuals in 2011 were genotyped. In 2010, samples were grouped by lake section, north (2 sites) and south (3 sites), and 285 samples were selected from each. The majority of fish captured were between 61-70 mm in the north and 61-75 mm in the south. Because we believed that length may reflect stock at the tails of the distribution, the samples from all fish ≤60 mm and >70 mm were taken for analysis (n = 98) for the northern section. The remaining 187 samples were taken in random draws in proportion to the sample numbers in the remaining two 5-mm length bins. Similarly, in the south area, samples from all fish ≤60 mm and >75 mm were retained for analysis (n = 48). The remaining 237 samples were taken in random draws in proportion to the sample numbers in the remaining three 5-mm length bins. Genomic DNA was extracted following the standard protocol with Qiagen DNeasy 96 Tissue Kits. Multiplex preamplification PCR was conducted to reduce error and failure rates in case of low concentrations of template DNA [44]. A 96 SNP panel was assayed using TaqMan reactions as in [45]. The 96 SNP panel included 3 mitochondrial SNPs and 93 nuclear SNPs now used in mixed stock analyses by ADFG [46]. The Fluidigm Biomark 96.96 was used to genotype the samples. For quality control, 8 out of every 95 individuals were reanalyzed to confirm that genotypes were reproducible and identify laboratory errors.
Genetic Analysis
ADFG provided the genotypes for the Chignik watershed baseline populations [46]. Monomorphic loci were identified and removed prior to further analyses. We followed the approach of Creelman et al. [36] for dealing with loci in linkage disequilibrium (LD). Tf_ex11-750 was dropped from the LD pair Tf_ex11-750 and Tf_in3-182. In the case of the two MHC loci (MHC2_190 and MHC2_251), we treated them as phenotypic characters [47] to retain the information contained by both loci. The three mitochondrial loci were combined into a composite haplotype.
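The length-stratified subsampling used for the 2010 genotyping described above (all fish in the tails of the length distribution plus proportional random draws from the central 5-mm bins) can be sketched in R as below. This is a minimal illustration, not the authors' code; the object name `catch` and the column `length_mm` are assumptions, and rounding may leave the total a fish or two off the target.

```r
# Minimal sketch (not the study's code) of the 2010 genotyping subsample:
# keep every fish in the tails of the length distribution and draw the rest
# at random in proportion to the counts in the central 5-mm length bins.
# The data frame `catch` and its column `length_mm` are assumed names.
set.seed(1)

subsample_by_length <- function(catch, lower = 60, upper = 70, n_total = 285) {
  tails  <- catch[catch$length_mm <= lower | catch$length_mm > upper, ]
  centre <- catch[catch$length_mm >  lower & catch$length_mm <= upper, ]

  # number of fish still needed after retaining all tail fish
  n_remaining <- n_total - nrow(tails)

  # assign each central fish to a 5-mm bin and sample bins proportionally
  centre$bin <- cut(centre$length_mm, seq(lower, upper, by = 5))
  bin_counts <- table(centre$bin)
  n_per_bin  <- round(n_remaining * bin_counts / sum(bin_counts))

  drawn <- do.call(rbind, lapply(names(n_per_bin), function(b) {
    in_bin <- centre[centre$bin == b, ]
    in_bin[sample(nrow(in_bin), min(n_per_bin[[b]], nrow(in_bin))), ]
  }))

  rbind(tails, drawn[, names(tails)])  # drop the helper bin column
}

# e.g. north section: all fish <= 60 mm or > 70 mm plus proportional draws to 285
# north_selected <- subsample_by_length(north_catch, lower = 60, upper = 70, n_total = 285)
```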
All stock identification analyses were carried out using a Bayesian approach developed by Pella and Masuda [48] (BAYES). Baseline populations were pooled into 13 populations following Creelman et al. [36]. Five populations belonged to the Black Lake stock group, seven to the Chignik Lake stock group, and one to the Chignik River stock group (geographic extent shown in Figure 1). A uniform prior was used, with each pooled baseline population given equal weight and the probabilities summing to one. For each mixture, three Markov chain Monte Carlo (MCMC) chains were run with randomized starting locations. Each chain had a length of 140,000 iterations with every 7th sample retained for a total of 20,000 samples per chain (burn-in 10,000). This level of thinning was determined by the Raftery-Lewis diagnostic [49] across multiple runs. A unique combination of starting stock proportions was used for each chain. Starting proportions of 0.3 were randomly assigned to 3 populations and the remaining 0.1 divided among all other populations.
Mixture Allocations
The relative contribution of each stock group to the unknown mixture sample was assessed using mixture allocation. BAYES established posterior densities of mixture proportions at the stock group level (Black Lake, Chignik Lake, Chignik River) for each chain. Convergence of the posterior densities among the chains was verified using the Gelman-Rubin diagnostic [50] and visual assessment. A 95% credibility interval, mean, and median stock group proportions were calculated for the combined chains for each mixture. In 2010, we genotyped all fish in the tails of the length distribution to ensure individual performance was well characterized across the length distribution. To avoid bias in our mixture allocation analysis, we used only the 61-75 mm fish (center of the distribution) randomly selected for genotyping from the south section of the lake (n = 237) in 2010. The majority of fish in 2010 (94.4%) were captured in the south section of the lake and 92% of these fish were 61-75 mm in length. Therefore, we believe the stock composition of this random sample best reflects the lake-wide composition.
Individual Assignment
The ability to robustly assign individuals to a stock group depends on the number of markers and the level of differentiation among reporting groups [51]. Our ability to assign individuals in the Chignik watershed has increased since past studies were conducted due to the increase in the number of SNP markers available (96 rather than 45 as in Simmons et al. [41]). We assessed the individual assignment ability of the baseline by conducting tests using mixtures created from individuals from known baseline populations, following the methods of Simmons et al. [41]. We randomly selected individuals from the baseline populations to create a test mixture of 200 individuals and generated a new baseline without the selected individuals. The representation of each stock group in the mixture reflected observed mixture allocations to stock group in 2010 and 2011 (25% Black Lake, 75% Chignik Lake). We repeated the randomization process 10 times, each time generating test mixtures and baselines with the same stock group proportions. We then used BAYES to assign posterior densities of mixture proportions to stock groups (as above) as well as assign individuals to the 13 populations. For each individual in a test mixture, we summed the population-level assignments by stock group.
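BAYES itself is standalone software, but the chain-level convergence checks described above can be reproduced in R once the thinned, post-burn-in samples of stock-group proportions have been exported. The sketch below is illustrative only and assumes three chains saved as CSV files (hypothetical names), one column per stock group.

```r
# Illustrative sketch: Gelman-Rubin convergence check and posterior summaries
# for three exported BAYES chains. File names and layout are assumptions.
library(coda)

chain_files <- c("mixture_chain1.csv", "mixture_chain2.csv", "mixture_chain3.csv")
chains <- lapply(chain_files, function(f) {
  mcmc(as.matrix(read.csv(f)))   # each row = one retained post-burn-in sample
})
chain_list <- mcmc.list(chains)

# Potential scale reduction factors; values near 1 indicate the chains converged
gelman.diag(chain_list)

# Combined mean, median and 95% credibility interval per stock group
summary(chain_list, quantiles = c(0.025, 0.5, 0.975))
```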
We then assessed the number of individuals assigned to each stock group at assignment thresholds ranging from 50 to 90% [41]. At each threshold level, we calculated the error rate by determining the proportion of individuals incorrectly assigned to that stock group. We calculated the mean error rate and standard deviation across all ten test mixtures by threshold and stock group. To determine the threshold to use for further analyses, we sought to maximize the number of individuals assigned while minimizing the rate of incorrect assignment. Assignment of unknown individuals to stock group in 2010 and 2011 was conducted using BAYES as previously discussed. We used the 80% threshold to assign individuals to a reporting group based upon the analyses above. This allowed the use of individual attributes (length, weight, condition) of each fish to define the attributes of each stock group by rearing environment and movement status.
Stock of Origin, Rearing Lake, and Body Condition
For individuals assigned to either the Chignik Lake or Black Lake stock as described above, we tested for differences in length between three combinations of stock of origin and location of capture: natal rearing environment of different stocks (Black Lake residents and Chignik Lake residents); emigrant/resident status of the same stock (Black Lake emigrants and Black Lake residents); and common rearing environment but different stocks (Black Lake emigrants and Chignik Lake residents). Within the comparison between Black Lake emigrants and Chignik Lake residents, there was a third group, which comprised the individuals not assigned at the 80% threshold. Given highly unequal sample sizes for most of the comparisons, we first tested for homogeneity of variances using Bartlett's test [52]. If variances were homoscedastic, we used Analysis of Variance (ANOVA), while if they were heteroscedastic we used the non-parametric Kruskal-Wallis test [52]. To explore the differences in the length-mass relationship and the relative body condition of the above pairs, we assessed four alternative regression models to predict fish mass. This approach is consistent with previous work in the watershed [41] and was suggested by Cone [53] as the preferred way to evaluate fish condition. In the first model to compare stocks rearing in their natal lake (Black Lake residents and Chignik Lake residents), all individuals (j) belonging to the stock groups (i) shared a slope and an intercept relating mass to length. The second model had different intercepts by stock group but the same slope, while the third model had the same intercept but different slopes. The final model had different intercepts and slopes for each stock group. In general form,
ln(mass_ij) = β_0i + β_1i · ln(length_ij),
where i indexes the stock group and j the individual; the simpler models constrain β_0i, β_1i or both to be equal across groups. The Black Lake emigrant versus resident comparison (Black Lake emigrants and Black Lake residents) used the same model approach, where individuals were grouped by location of capture instead of stock group. Finally, the shared rearing environment comparison (Black Lake emigrants and Chignik Lake residents) used the same model framework. In these two comparisons sample sizes were unequal because of the few Black Lake-origin individuals identified in Chignik Lake. Models were compared using the Akaike Information Criterion corrected for small sample sizes (AICc) [54]. Additionally, AIC weights (w_i) [54] were calculated for each model within a comparison. Given the suite of models considered, each w_i is the estimated probability that the given model is the best model for the data.
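A compact R sketch of the comparisons described in this section is given below, assuming a data frame `fish` with columns `length_mm`, `mass_g` and `group` (e.g. Black Lake resident vs Chignik Lake resident); these names are placeholders, not the authors' code. It follows the same logic: a Bartlett test to choose between ANOVA and Kruskal-Wallis for length, then the four candidate length-mass models ranked by AICc and Akaike weights using the AICcmodavg package.

```r
# Hedged sketch of the body-condition analysis; `fish`, `length_mm`, `mass_g`
# and `group` are assumed names for illustration.
library(AICcmodavg)

fish$ln_len  <- log(fish$length_mm)
fish$ln_mass <- log(fish$mass_g)

# Length comparison: Bartlett's test decides between ANOVA and Kruskal-Wallis
bart <- bartlett.test(length_mm ~ group, data = fish)
if (bart$p.value > 0.05) {
  print(summary(aov(length_mm ~ group, data = fish)))   # homoscedastic case
} else {
  print(kruskal.test(length_mm ~ group, data = fish))   # heteroscedastic case
}

# Four candidate length-mass models
mods <- list(
  shared_slope_intercept      = lm(ln_mass ~ ln_len, data = fish),
  different_intercepts        = lm(ln_mass ~ ln_len + group, data = fish),
  different_slopes            = lm(ln_mass ~ ln_len + ln_len:group, data = fish),
  different_slopes_intercepts = lm(ln_mass ~ ln_len * group, data = fish)
)

# Rank by AICc and compute Akaike weights (w_i)
aictab(cand.set = mods, modnames = names(mods))
```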
These analyses included fish that were individually assigned to a reporting group at the 80% level. To test the robustness of our results to the assignment threshold used, we compared our results to those obtained when using a 70% (less conservative) or 90% (more conservative) threshold (Table S1, Figure S1, Figure S2, and Figure S3). Analyses were conducted using R statistical software [55], including the package "AICcmodavg" [56].
Sample Collection 2010-2011
In Chignik Lake, 1,000 juvenile sockeye salmon were sampled for length, mass, and fin clip in 2010 and then later sub-sampled for genotyping. In 2011, catch rates were lower, and all sockeye salmon caught at all sites were retained for later analysis (n = 233). In Black Lake, juvenile sockeye sample sizes were 341 and 770 in 2010 and 2011, respectively.
Laboratory & Genetic Analysis
Five hundred and seventy fish were genotyped from 2010 samples and 233 fish were genotyped from 2011 samples. The assay for the SUMO1-6 locus failed for all samples and was excluded from the analysis. In 2011 the locus U1016-115 was also excluded due to assay failure. Two loci were monomorphic (metA-253, txnip-401) across the Chignik populations and were not used in further analyses. Additionally, in 2010 two fish were missing genotypes for at least 15% of the loci and were excluded.
Individual Assignment
Individual assignment of mixtures composed of known individuals demonstrated that the 80% threshold assigned a substantially larger number of individuals than the 90% level and still retained low error. At the 80% threshold, on average 75% of the individuals in the mixture were assigned to either the Black Lake or Chignik Lake stock groups. The mean error rate for individuals assigned to Black Lake was 11% (SD ±7%), while the Chignik Lake error rate was 4% (SD ±2%). At the 90% threshold, 59% of individuals were successfully assigned to a stock of origin, and there was a greater decrease in the proportion of fish assigned to Black Lake as opposed to Chignik Lake. Mean error rates at the 90% threshold were 3% (±4%) for Black Lake and 3% (±2%) for Chignik Lake. Overall, we were able to assign 78% and 80% of individuals to a stock group at the 80% threshold for 2010 and 2011, respectively (Table 1). The majority of individually assigned fish were from the Chignik Lake stock due to their numerical dominance in the mixtures in both years. Of the 568 individuals sampled in 2010, 34 were assigned to Black Lake and 416 were assigned to Chignik Lake. In 2011, 31 of 233 individuals were assigned to Black Lake and 150 to Chignik Lake. We used the individual assignments at the 80% threshold to assess the length distributions and relative body condition of juvenile sockeye salmon among stocks and rearing lakes. Our analyses show that lake rearing habitat strongly affects juvenile sockeye salmon body condition. Differences in body condition differentiated emigrant (low condition) versus resident (high condition) individuals within a single stock group (i.e., from Black Lake). Despite emigrants originating from the more productive lake, they did not differ in body condition from the individuals originating in the lower-productivity, recipient habitat. While populations exploit diverse habitats, these habitats differ in productivity, and emigration may not improve attributes such as body condition.
Comparing Two Natal Lakes: Black Lake Residents Versus Chignik Lake Residents
In both years, there were significant differences in length between stocks rearing in their natal lakes (2010: df = 1, Kruskal-Wallis (K-W) χ² = 89.4743, p < 0.001 (2.2 × 10⁻¹⁶); 2011: df = 1, K-W χ² = 134.0431, p < 0.001). In 2010, Black Lake residents were longer (mean = 69.7 mm, SD = 5.4) than Chignik Lake residents (mean = 65.1 mm, SD = 7.9); however, the reverse was true in 2011 (Black Lake residents: mean = 64.0 mm, SD = 5.4; Chignik Lake residents: mean = 70.5 mm, SD = 8.2). There were clear differences in body condition among individuals rearing in their natal lakes. Black Lake residents were of higher body condition in both 2010 and 2011 than Chignik Lake residents (Figure 3). In 2010, there was strong support for the different slope and intercept model (w_i = 1.00). This is probably because the Black Lake residents had a much narrower length range than Chignik Lake residents and small Chignik Lake residents had very low body condition. In 2011, the support was strongest for a different intercept and same slope model, but there was also similar support for models with either different slopes or different intercepts (Table 2).
Home Versus Away: Black Lake Residents Versus Emigrants
In 2010 there was a significant difference in length between Black Lake emigrants and residents (df = 1, K-W χ² = 12.0891, p = 0.005), in which Black Lake residents were longer than individuals that had immigrated to Chignik Lake (Black Lake emigrants: mean = 64.2 mm, SD = 9.5). No difference in length was detected between emigrants and residents in 2011 (Black Lake emigrants: mean = 65.2 mm, SD = 7.7). In both 2010 and 2011, Black Lake emigrants were of lower body condition than Black Lake residents (Figure 4). In 2010, there was strong model selection support for a model with different intercepts and slopes (w_i = 0.75), likely driven by the low body condition of smaller Black Lake emigrants. In 2011, there was no support for the null model but relatively similar support for the other three models (Table 2).
Locals Versus Migrants: Chignik Lake Residents Versus Black Lake Emigrants
We found significant differences in among-group lengths in 2011 (ANOVA, df = 230, F = 7.1867, p = 0.001) but not in 2010 (ANOVA, df = 565, F = 0.4104, p = 0.66331) (mean lengths provided in the sections above). A Tukey test for multiple comparisons indicated that in 2011 Black Lake emigrants were significantly smaller than Chignik Lake residents (p = 0.004), but there were no significant differences between either group of known origin and unassigned individuals captured in Chignik Lake. In 2010 there was strong support for a model describing the relationship between length and mass with different slopes and intercepts by natal origin (Table 2). Small Black Lake emigrants had a higher body condition than small Chignik Lake residents (Figure 5). As length increased, however, Chignik Lake residents increased in mass more rapidly than Black Lake emigrants. In 2011, there was little visual difference between stocks in their body condition and no model showed substantially stronger support than the shared slope and intercept model (Table 2).
Discussion
Our mixture analyses showed that juvenile sockeye salmon spawned in Black Lake tributaries made up a substantial but variable proportion of the fish that were rearing in Chignik Lake by the end of the growing season when compared to a survey from 2008 [41].
Using individual genetic assignment to stock of origin, we characterized the body condition of juvenile sockeye salmon residents in their natal lakes as well as those that immigrated to new habitat. Individuals from Black Lake that were rearing in their natal habitat were in substantially better body condition than Chignik Lake fish rearing in their natal, less productive habitat. Juvenile sockeye salmon that emigrated from Black Lake to Chignik Lake tended to have lower body condition near the end of their first growing season than individuals that stayed in their natal Black Lake habitat. Finally, within the common rearing environment of Chignik Lake, fish of Black Lake and Chignik Lake origin had similar body conditions, and the subtle differences detected were size-dependent in the year they were statistically significant. Residency in productive, warm Black Lake led to the highest body condition for juvenile sockeye salmon observed throughout the Chignik watershed. This result likely reflects the differences in ecosystem productivity between Black Lake and Chignik Lake. Further, the high body condition of fish rearing in Black Lake may indicate that successful Black Lake residents are able to achieve critical length thresholds earlier in the season and switch to an energy allocation strategy that favors overwinter survival by allocating more energy to storage rather than further growth in length [57]. Mean length comparisons between Black Lake residents and Chignik Lake residents produced opposite patterns in 2010 and 2011. We think this is likely caused by changes in the Chignik Lake age composition (relative proportions of age-0 and age-1) rather than by differences in lake productivity among years. Age composition data were not collected, however. Poorer body condition emigrants from Black Lake were always present in Chignik Lake but made up a variable proportion of the juvenile sockeye salmon. While credibility intervals show a slight overlap between 2010 (Black = 3.1-18.6%) and 2011 (Black = 16.5-34.1%), these proportions are quite different from those observed in August 2008 (Black = 37-56%). Westley et al. [29] showed that Black Lake emigrants were of lower body condition when departing Black Lake in early to mid-summer than Black Lake residents. We show that these individuals continue to have lower body condition in alternative rearing habitat. Given the emigration timing reported for recent decades [29] as well as the substantial fraction of emigrants observed in Chignik Lake in July by Simmons et al. [41], we believe that emigrants have likely spent a month rearing in Chignik Lake and that their body condition is reflective of Chignik Lake growth conditions. Their convergence on Chignik Lake growth potential is also reflected in the shared body condition with Chignik Lake residents in the common rearing environment in 2010 and 2011 [consistent with 41]. Interestingly, while earlier observations of poor condition Black Lake emigrants occurred during the extremely warm summers of 2005 and 2006 [29], we show that this also occurs during more average climate conditions. Mean Black Lake temperature from June 12 to August 26 was 12.6°C and 12.1°C in 2010 and 2011, respectively, and the maximum temperature was 15°C. These temperatures were substantially cooler than when poor body condition emigrants were observed in 2005 and 2006.
In those years, the mean water temperatures over the same period were 14.1°C and 12.4°C, with maximum temperatures reaching over 17°C in both years. If sockeye salmon are feeding at maximum consumption, the optimal temperature for growth is 15°C [58]; however, if food is limited, optimal growth temperatures are lower. Therefore, the coolest temperatures of the last decade may provide optimal growing conditions in Black Lake, while the warmest years are likely sub-optimal for much of the population. However, our results indicate that conditions in Black Lake are limiting for growth for at least a fraction of the population even during cool summers. For these individuals, emigrating downstream may offer benefits even though growth potential in Chignik Lake is lower. These cooler temperatures in Chignik Lake, although reducing the scope for growth, may also reduce metabolic stress and potentially improve survival. A longer growing period in fall due to Chignik Lake's large thermal mass may also provide growth opportunities unavailable in Black Lake in the fall. Finally, it is unclear whether Black Lake emigrants ultimately show differences in freshwater rearing duration. Given the small differences in length relative to Chignik Lake individuals (some of which are age-1), Black Lake emigrants may achieve sufficient length to smolt in the following spring or they may rear an additional year in freshwater. The relationship between the condition of downstream emigrants and the duration of freshwater rearing could be important for quantifying the importance of emigration for survival. Based on ADFG brood tables, however, there appear to be no large-scale shifts in Black Lake freshwater age composition seen in returning adult sockeye between 1922 and 2010 (ADFG, unpublished). The proportion of juvenile sockeye of Black Lake origin in Chignik Lake is a function of both the downstream emigration rate and the production of sockeye salmon in Chignik Lake. With only three years of observation, our inferences about what causes variation in the contribution of Black Lake fish to the juvenile population in Chignik Lake are limited. We found no relationship between the proportion of Black Lake juvenile sockeye in Chignik Lake and either Black Lake temperature or the ratio of Black Lake to Chignik Lake adult spawners in the previous year. One hypothesis is that in warm years Black Lake is more stressful [59], which increases the downstream emigration rate. Similarly, greater competition during years of high densities in Black Lake could lead to increased emigration downstream. Temperature variation was very low between our study years and 2008, however, as was the adult escapement in the preceding years. In Chignik Lake, newly emerged fry are particularly susceptible to predation by coho salmon [60], and variation in predation pressure among years could alter late-season stock composition in Chignik Lake. Furthermore, while sockeye dominate the pelagic fish community in Chignik Lake, the community composition has become less sockeye-dominant in recent decades [61], and this could alter interspecific interactions and the opportunities for growth by Chignik Lake populations. Given the two-year duration of freshwater rearing for Chignik Lake stocks, changes in predation or competition may affect the age composition and stock composition in Chignik Lake in subsequent years.
Our ability to make inferences about the attributes of a stock group depends on the success of our individual assignment. While we successfully assigned 78-80% of the individuals in our sample at an 80% probability threshold, there may be some underlying bias in the subsequent analyses based upon the individuals we were able to assign. A review of our known mixture error rate tests, however, showed that there were no differences among populations in the likelihood of not being assigned at the 80% probability threshold. Additionally, we must be cautious when comparing mixture allocations generated using different numbers of genetic markers. In this case, differences among the proportion of Black Lake individuals observed in 2008 using 45 SNP markers and the proportions observed in 2010 and 2011 using 96 SNP markers may not be directly comparable. Instead, differences may be exacerbated or dampened by different levels of stock group differentiation between marker sets as well as the different genotypes that may be present in the samples among years. The identification of individuals to their population of origin is essential to our ability to assess the role of migration and habitat connectivity across multiple scales of ecological organization. Emerging genetic tools offer a robust approach for investigating the presence and attributes of multiple populations within a meta-ecosystem. For species or regions where tagging studies face many logistical challenges, genetic markers provide an alternative approach that is relatively economical and efficient at tracking the stock identities of mixed-stock populations. The consequences of movement and emigration for individuals, populations, and ecosystems can be profound. Moving to new habitat may improve growth rates over similar-sized individuals [24] or allow inferior competitors the opportunity to improve growth rates [18]. Assessing the effects of alternative movement strategies on individual condition is a first step to evaluating the fitness consequences of these strategies. As habitats vary in their productivity among years, rates of migration among habitats may vary, as well as the contribution of migratory individuals to population productivity [18,19]. Migration or movement at one life stage may also alter the probability of later life history outcomes. For example, Hamann and Kennedy [62] found that juvenile Chinook salmon dispersal was related to the probability that adults would spawn in non-natal habitats. At the population level, this could affect the relative differences among populations and their fitness as well as the size of the reproductive population. Ultimately, the movement of individuals among connected habitats may drive the function and properties of meta-ecosystems by influencing trophic pathways [19] or the flux of materials among systems and in turn creating a feedback to the success of individuals and populations. Our results highlight the importance of connectivity among the habitats that comprise a meta-ecosystem for juvenile salmonids. In the Chignik watershed, it has become apparent that Black Lake, while a more productive habitat than downstream Chignik Lake, can become unfavorable for juvenile sockeye salmon as the growing season progresses [59,63]; this effect is particularly pronounced when lake temperatures are warmer than average [29]. Biologically compromised individuals tend to be the ones that emigrate from Black Lake [29].
Through the application of modern genetic tools, this research showed that lower performance by Black Lake emigrants continues even after moving into new habitat. The development of landscape genetics [64] has mostly focused on how the physical dimensions of landscapes affect microevolutionary processes. However, this study is one example where landscape genetics sheds new perspective on ecological processes such as migration and the condition of migrating individuals. Only by understanding how individuals respond to diverse landscapes can we scale up to understanding the relative importance of different configurations of habitat networks to populations and ecosystems. Combining landscape genetics with meta-ecosystem perspectives will likely be a powerful approach for developing effective strategies for protecting and restoring habitats and their connectivity. It is becoming increasingly recognized that the connectivity of diverse habitats is important for maintaining resilient populations and the variety of ecosystem services and products they provide to people.
Figure S1 Black Lake resident versus Chignik Lake resident body condition using alternative individual assignment probability thresholds. Analyses were conducted using individuals assigned to stock of origin at both the 70% and 90% assignment probability thresholds. Data presented as in Figure 3. (TIF)
Figure S2 Black Lake resident versus Black Lake emigrant body condition using alternative individual assignment probability thresholds. Analyses were conducted using individuals assigned to stock of origin at both the 70% and 90% assignment probability thresholds. Data presented as in Figure 4. (TIF)
Figure S3 Black Lake emigrant versus Chignik Lake resident body condition using alternative individual assignment probability thresholds. Analyses were conducted using individuals assigned to stock of origin at both the 70% and 90% assignment probability thresholds. Data presented as in Figure 5.
A systematic literature review on uncertainties in cross-docking operations
Purpose – The technique of cross-docking is attractive to organisations because of the lower warehousing and transportation (consolidated shipments) costs. This concept is based on the fast movement of products. Accordingly, cross-docking operations should be monitored carefully and accurately. Several factors in cross-docking operations can be impacted by uncertain sources that can lead to inaccuracy and inefficiency of this process. Although many papers have been published on different aspects of cross-docking, there is a need for a comprehensive review to investigate the sources of uncertainties in cross-docking. Therefore, the purpose of this paper is to analyse and categorise sources of uncertainty in cross-docking operations.
Design/methodology/approach – A systematic review has been undertaken to analyse methods and techniques used in cross-docking research.
Findings – The findings show that existing research has limitations on the applicability of the models developed to solve problems due to unrealistic or impractical assumptions. Further research directions have been discussed to fill the gaps identified in the literature review.
Originality/value – There has been an increasing number of papers about cross-docking since 2010, among which three are literature reviews on cross-docking from 2013 to 2016. There is an absence of studies in the current literature that critically review and identify the sources of uncertainty related to cross-docking operations. Without the proper identification and discussion of these uncertainties, the optimisation models developed to improve cross-docking operations may be inherently impractical and unrealistic.
Introduction
Over recent years, competition between companies has forced them to cut costs to remain in the market. Cross-docking, which refers to the direct shipment of received products from inbound trucks to outbound trucks, is a just-in-time and lean system of distribution, which makes an essential contribution to the rapid movement of goods (Nassief et al., 2016). This approach to distributing products helps reduce costs and leads to better service to customers. Distribution of products in an efficient way along the supply chain is a complex task that needs careful attention to address a large number of challenges such as uncertainties, just-in-time and cost-effective distribution (Dulebenets, 2019). Consequently, many businesses try to address these challenges by using cross-docking, but cross-docking operations are influenced by the dynamic nature of the business. Cross-docking operations consist of receiving inbound trucks and assigning them to the doors of the cross-docking centre, and likewise for shipping trucks and doors.
Method for literature review
The objectives of this literature review are to examine the studies in cross-docking under uncertainty so that all possible sources of uncertainty can be identified and the limitations of existing studies can be discussed. To achieve this objective, a systematic literature review (SLR) was conducted. To carry out a literature review, a wide range of research should be studied. However, it is impossible to consider all studies unless it is a new field (Seuring and Müller, 2008).
To define the area of research, the selection criteria and the research steps, and to produce a better review of the literature, SLR guidelines were adopted. An SLR can be divided into four stages (Denyer and Tranfield, 2009; Tranfield et al., 2003): planning, conducting the review, analysis and presenting the findings.
2.1 The planning process in SLR
To develop a coherent flow, the gaps in the literature need to be identified and discussed. To present a comprehensive literature review of cross-docking under uncertainty, the following questions are framed to guide the literature review:
• Which decision levels are considered?
• What uncertainties are considered?
• What performance measures are discussed?
• What methodology is used?
• What are the limitations?
2.1.1 The searching and screening process in SLR. Boolean logic was used to define the keywords for the search. The following keywords were selected: "cross-dock*" AND "uncertainty" AND "supply chain". After determining the keywords, eight databases were identified and selected, including Scopus, Web of Science, ScienceDirect, Emerald, Wiley Online, Springer Online, Taylor & Francis and ProQuest. Google Scholar was used as a separate database. The period for the data search was set from 1980. According to Krajewski et al. (1999) and Apte and Viswanathan (2000), the cross-docking approach started from the 1930s. However, it only became popular from the 1980s after the successful experience of Walmart. In addition, we excluded the strategic level because these studies tend to focus on infrastructure and facilities development prior to the construction of cross-docking centres. Other inclusion criteria were that the research was written in English and the document was either a published paper, a thesis, a book or a chapter. After applying these rules, 1,351 items were found. The list was then checked for duplication, which resulted in 234 items being excluded. In the screening process, the authors read the title, abstract and conclusion of the remaining studies and excluded studies that did not have uncertainty in the abstract and conclusion. This process resulted in 1,079 items being removed and 38 remaining. In addition to the database search, a snowball approach was used to avoid the possibility of missing relevant papers. The searching and screening process resulted in 46 papers, which have been included in this literature review.
2.1.2 The analysing process in SLR. In evaluating the selected studies, the approach suggested by Tranfield et al. (2003) was used. Each study was evaluated using descriptive and thematic analysis (Table I). The distribution of the selected studies by year of publication is shown in Figure 1; there has been an increasing number of studies from 2012. In terms of the research context of these studies, a majority of studies were from developed economies, with the USA having the greatest number (Figure 2). Among these published studies, a third were published in journals, about a quarter were theses and over 40 per cent were conference papers (Figure 3).
3. Thematic findings: uncertainty components in cross-docking centre operations
In this step, all research items were reviewed according to the components of uncertainties. Following the discussion below, tables are presented to summarise the essential features of each study. The papers were categorised based on the sources of uncertainty as shown in Table II, and information on the performance measures used in these studies is provided in Table III. Table IV summarises the solution methods.
Based on an analysis of the reviewed studies, a framework is developed to illustrate the composition of uncertainty components in cross-docking operations (Figure 4).
3.1 External uncertainty components
In this part, each component of research is analysed in detail according to the external uncertainty component.
3.1.1 Demand. Demand is one of the main factors of uncertainty in the supply chain environment. Most businesses are faced with the challenge of accurately predicting customer needs in terms of product type, quantity and timing of delivery. The inability or inaccuracy in predicting demand has a flow-on effect on cross-docking operations. The existing literature on cross-docking only considered the impact of demand uncertainty on the network, leaving the effect on cross-docking operations unaddressed. According to Yan and Tang (2009), demand uncertainty can have a negative impact on system performance in terms of total expected cost. The impact can be decreased by employing pre- or post-distribution strategies. According to the results, pre-distribution is preferred when demand is stable. However, in a situation where the demand is uncertain, post-distribution is preferred. Pre-distribution has less impact on cross-docking operations because suppliers have done all the necessary preparation, while in post-distribution the process of preparing happens inside the cross-docking centre, leading to high operating costs. A weakness of Yan and Tang (2009) is that the pre- and post-distribution strategies were evaluated in isolation from other problems such as scheduling and dock-door assignment in the DC, which may affect the outcomes of the distribution strategies. Using a robust optimisation model, Spangler (2013) addressed demand uncertainty from a strategic level through location selection for the cross-docking centre to ensure that the centre can handle changes in demand caused by seasonal fluctuation and adverse weather conditions. The outcomes of Spangler's (2013) research may be helpful for the initial planning of a cross-docking centre but less relevant to the operation of the centre. Inability to predict demand can lead to delays of trucks at cross-docking centres and more gas and carbon emissions (Rodriguez-Velasquez et al., 2010). Arnaout et al. (2010) considered demand, lead-time and service time as stochastic parameters, which improved the results by reducing the use of unrealistic constraints in their models. The results indicate that truck utilisation can be decreased by using cross-docking centres and larger trucks when demand is uncertain. However, Arnaout et al. (2010) assumed that cross-docking centres have infinite space and that loading and unloading delays are negligible, which is unrealistic.
3.1.2 Supply. Uncertainty in supply is one of the disruption factors in the operation of distribution centres. In order for a distribution centre to deal with the negative impact of supply uncertainty, a large amount of inventory is required. This contradicts the aim of DCs and cross-dock centres. Another reason for uncertainty in supply is that retailers tend to request shorter delivery times, increasing the pressure on both manufacturers and distributors. The inability of cross-docking centres to distribute products to manufacturers or retailers on time is caused by the high volume of transactions along the supply chain (Cattani et al., 2014; Shi et al., 2013). It is vital for distributors to have proper access to accurate information derived from suppliers.
This can help distribution centres to develop proper plans to manage their resources. The literature on cross-docking often assumes that the supply is always stable, leaving the impact of supply uncertainty on sequencing and scheduling in cross-dock centres unaddressed. According to Cattani et al. (2014), different customers request different products at various times. Some of these are supplied by distribution centres and cross-docking centres, and others are provided through direct shipments. Resupply of these orders is sometimes delayed. Also, uncertainty in supply is one of the reasons for an increase in supply cost. Cattani et al. (2014) aimed to help online retailers reduce the expenses of resupplying and short delivery. The results show that a cross-docking strategy can help reduce the penalties for delays in resupplying. This study only considered cross-docking from the demand and supply viewpoint without considering the scheduling and assignment of trucks. Shi et al. (2013) indicate that in order to control disruptive events such as supply shortages, three factors should be optimised. In storage space, the dwelling time (staying time) of parts, together with the number of pieces whose stay exceeds the threshold time, should be minimised. In addition, along with the two previous factors, throughput should be maximised. A main weakness of this study was that they considered temporary storage as infinite (Shi et al., 2013).
3.1.3 Arrival time. The literature about uncertainty in cross-docking shows that managers consider arrival time uncertainty as one of the most critical factors that can have a negative impact on the planning and scheduling of cross-dock centres (Boysen and Fliedner, 2010; Ladier and Alpan, 2016a). In the cross-docking literature, most of the researchers assumed that arrival time is constant and that all trucks are available at time zero, which is not realistic. Receiving and shipping trucks in the real environment have a release and due time, which should be monitored carefully to reduce the overall cost associated with earliness and tardiness. Boysen and Fliedner (2010) identified several factors, such as traffic and engine failures, that can delay the arrival time of trucks. Monitoring the arrival time of trucks and scheduling both receiving and shipping trucks can improve the efficiency of transhipment. The operation of cross-docking centres should be dynamic and practical. Although a static environment can be a starting point for exploring a research area, dynamic situations should be considered in research in order to improve cross-docking operations in practice. One of the first studies on dynamic cross-docking was presented by Konur and Golias (2013a). The authors pointed out that the arrival time of trucks needs careful observation and that prediction methods are not a proper way to reduce these uncertainties. Online scheduling or scheduling on a rolling planning horizon can help practitioners obtain better information on the arrival time of trucks. However, a large amount of data and uncertainty in cross-docking operations can make the scheduling process more complicated (Boysen and Fliedner, 2010; Konur and Golias, 2013a; Van Belle et al., 2012). Konur and Golias (2013a) considered only the inbound side of a cross-dock centre to minimise the total waiting time for trucks with consideration of risk aversion. The model provided four perspectives.
The deterministic perspective disregards possible earliness and tardiness, while the pessimistic perspective is a risk-averse method that uses the worst probability distribution function for arrival time. The optimistic perspective works on the best possible distribution for arrival time, and there are also hybrid cases. Konur and Golias (2013b) also conducted a study to minimise costs associated with the arrival time of trucks on the inbound side of cross-docking centres. This method was compared with a first-come-first-served policy. In this study, the probability distribution of the arrival time of trucks was not considered, and temporary storage space was zero. Continuing the research of Konur and Golias (2013a), Heidari et al. (2018) performed a bi-objective bi-level optimisation to schedule and allocate trucks. Different from Konur and Golias's (2013a) study, Heidari et al. (2018) considered the outbound side as well. The arrival time of trucks was uncertain, but a time window was defined for truck arrival. To improve usability, Ladier and Alpan (2016b) developed a model to address the frequent disruptions in the scheduling of trucks in cross-docking centres. However, a weakness of their study is that the limits of the temporary storage are not considered. In order to reduce the long waiting times at the gates and yards, management of arrival time is vital. H. explained that reducing the waiting times caused by delays in the arrival time of trucks can increase efficiency. To reduce the negative impact of uncertainties, one of the practical measures is a truck appointment system. This method can monitor the planning of arrival times by assigning an appointed slot to each truck, which, in turn, minimises truck deviation time. Although H. considered the limitation of resources and doors, the limitations of temporary storage and yard space were not considered. The above-discussed studies considered the uncertainties in the arrival time of receiving trucks. The arrival time of shipping trucks is equally important and can impact cross-docking operations. The first study about uncertainties in the arrival time of shipping trucks was presented by Zaerpour (2013) and Zaerpour et al. (2015). The authors argued that when trucks arrive outside the time window, the risk of reshuffling with shared storage will increase. Reshuffling time in this system can be increased because of improper assignment. First come, first served (FCFS) can increase the possibility of reshuffling. Accordingly, uncertainties in truck arrival times can decrease the accuracy of defined time windows, which leads to reshuffling and an increase in cross-docking operation costs. To reduce the cost associated with reshuffling, a proper time window is needed for the arrival of shipping trucks. It is also interesting to consider the probability of facility breakdowns. Queuing systems can help manage the waiting time of trucks in cross-docking centres. To improve the system, Motaghedi-Larijani and Aminnayeri (2017) proposed a model treating the arrival time of single outbound trucks as random with a uniform distribution. A queuing model was developed based on a situation where the expected waiting time of customers is considered. The aim of this paper was to minimise the total admission and waiting time cost. However, the research only used one door and the arrival time on one side, which limited the applicability of the model.
By considering truck arrival time as a deterministic and certain parameter, the literature about cross-docking is far from the reality of the industry. The arrival time of trucks can be the starting source of uncertainty in cross-docking operations. Accordingly, Motaghedi-Larijani and Aminnayeri (2018) considered the arrival time of trucks to follow a beta probability distribution and applied a queuing model to this problem. They calculated the waiting times of customers based on the delays in arrival time.
3.1.4 Availability of trucks. The availability of trucks, which is related to external suppliers, can impact the planning and scheduling of resources. When proper resources are not available, it impacts all products scheduled for delivery to customers. This factor includes both the inbound and outbound sides of cross-docking centre operations. In addition, trucks can fail during the delivery of products to cross-docking centres or retailers. If the availability of trucks is disrupted, there is a need for reallocation of all orders and resources to fulfil the scheduled delivery. Amini and Tavakkoli-Moghaddam (2016) developed a model that considered truck breakdown during service time. The breakdown of trucks followed a Poisson distribution. The objective of this paper was to minimise the total weighted completion time or tardiness of outbound trucks. This paper only considered the outbound process. All of the trucks were assumed to be available at time zero, which is impractical, and the temporary storage capacity was infinite.
3.2 Internal uncertainty components
3.2.1 Processing time. Processing of inbound and outbound trucks is prone to uncertainty. Delay in freight handling can prolong the distribution process in the whole system. There are several factors that can impact the processing time of cross-dock centres. For instance, loading and unloading of trucks can be impacted by the skills of the workforce, in terms of the time that different people need to do the same job. This process can disrupt the flow of products in cross-dock centres. The loading, unloading and transfer times for different types of products also differ, which can influence planning. Accordingly, Wang and Regan (2008) suggested that using real-time information to schedule the unloading of receiving trucks can decrease the total freight transfer time. Therefore, they focussed on the effect of new receiving trucks on overall transhipment time. One weakness of this study is that it did not consider both inbound and outbound sides. It is important for cross-dock operations from a practical viewpoint to focus on the unloading, loading and waiting times of trucks. McWilliams (2009) conducted a study into the processing time inside cross-docking centres to minimise total transfer time. A dynamic load-balancing algorithm was designed. The process of unloading trucks and the assignment of trucks to doors were updated after unloading each truck. The study assumed that all shipping and receiving trucks were available at time zero, which is not realistic. In addition, the priority of each truck was not considered. According to Sathasivan (2011), the unloading and loading times of trucks can be overestimated or underestimated. Both can impact the optimal solution. Therefore, it is pivotal to consider the uncertainty in the unloading time of trucks. As a result, stochastic and robust optimisation approaches were implemented.
Sathasivan (2011) minimised weighted completion time to determine the optimal schedule for unloading receiving trucks. The study assumed that trucks were available at time zero and that the cross-docking centre had only one receiving and one shipping truck, which is far from a real environment.
3.2.2 Available resources. Material handling is the core of operations and includes the most expensive operations in cross-docking. Unloading, transferring, consolidating, splitting of orders and loading during the operation of cross-docking rely on labour and available resources. Therefore, this costly operation needs to be carefully monitored to reduce cost and increase utilisation. Shakeri et al. (2012) developed a model to address the delays caused by forklift breakdown inside the cross-dock centre. The model may be improved through assessing the probability of forklift breakdowns. From a different perspective, Soanpet (2012) studied the effects of capacity uncertainty on the location of cross-dock centres to minimise the total routing cost. Capacity can impact the number of products that can be handled in the centre. However, their study did not consider limited temporary storage and truck arrival time. Zouhaier and Ben Said (2017a) argued that increasing the available resources can increase the performance of the cross-dock centre and, at the same time, decrease the completion time. They presented a multi-agent-based truck scheduling model to coordinate the arrival and gate process and the availability of human resources inside the cross-docking centre. They considered available human resources with different abilities, but did not consider temporary storage inside the cross-docking centre.
3.2.3 Departure time. The departure time of trucks is one of the uncertainty components that can result from internal and external sources. It can absorb other uncertainties such as arrival time and service time. This situation becomes more challenging when the trucks on the inbound and outbound sides have a deadline. Assignment of trucks to doors is one of the critical decisions in cross-docking operations. With restricted truck departure time, M.K. Acar (2004) studied dock-door assignment to minimise the distance travelled inside the cross-docking centre to deliver products to shipping doors. The authors assumed that shipping trucks were always available at shipping docks, and temporary storage was not considered, which is not realistic (Acar et al., 2012). The literature about departure uncertainty is limited and requires further attention. Studies in the area of flight routing and scheduling with departure uncertainties in air traffic management may be a good starting point for developing solutions in cross-docking operations.
3.3 Multiple uncertainty components
Multiple uncertainties can exist during cross-docking operations. For the purpose of discussion, research that considered more than one uncertainty component is grouped into this category. Inaccuracy in the arrival time and content of trucks can lead to uncertainty in processing time. Yu et al. (2008) presented an online method to solve dock-door assignment problems. The authors considered uncertainties in arrival time, the content of trucks and supply to minimise processing time using the FCFS policy. According to the results, this method can improve resource planning by 20 per cent. Temporary storage and the unavailability of resources were not considered in this study.
Following the same concept, Alpan (2010) presented a problem for the scheduling of cross-docking operations under uncertainties of inbound truck arrival time. The model aimed to minimise the total cost by using the best sequence of shipping trucks. They assigned the products to the shipping trucks following the first-in-first-out policy, which is the same as FCFS. The model, however, only considered one receiving door and one shipping door with infinite temporary storage space. The results illustrated that when no information was available on the arrival time of trucks, the total cost exhibited a significant increase (Larbi et al., 2011). Manual rules used to manage cross-dock operations give sub-optimal results, which, according to Li et al. (2012), is inappropriate. Consequently, they developed an online scheduling and planning tool that reached optimal solutions for planning inbound trucks, allocating trucks to docks and prioritising jobs for forklifts to maximise the output. Their research attempts to optimise cross-docking operations in three layers: planning, scheduling and coordination. The aim of the planning layer is minimising processing time, which consists of sequencing and allocation of containers. Processing time is the first uncertainty component, the late arrival time of trucks is the second uncertainty and the third one is resource management in a dynamic environment. To integrate the three layers, an event-based integrated optimisation model was developed by Ladier et al. (2014) with discrete event simulation. They aimed to evaluate the robustness of the IP model. In their study, arrival time, unloading time and processing time were uncertain. They used FlexSim software to develop the simulation model. In order to model unloading and transfer times, they used a triangular distribution and, for arrival time, an exponential distribution. Temporary storage space was infinite. Resources inside the cross-docking centre were limited. The results showed that the model had reasonable robustness against uncertainties. To improve the previous model, Ladier et al. (2015) conducted further research in which they also considered uncertainties in available resources and tasks. Collaborative computing using a pool of heuristics can be used to find solutions. Yin et al. (2015) researched collaborative vehicle routing and scheduling in cross-docking centres under uncertainties to minimise the makespan of cross-docking centres along the horizon. Three types of uncertainties were considered, including vehicle failure, demand and arrival time. In order to solve the problem, a hyper-heuristic method was used which included collaborative computing and service rules. In this paper, the temporary storage and the process inside the cross-docking centre were not considered. Two-thirds of the operations in cross-dock centres are focussed on scheduling and assignment. Proper coordination of inbound and outbound activities can facilitate smooth operation inside the cross-dock centres. Fatthi et al. (2016) presented a study about the scheduling and assignment of trucks in the inbound phase to minimise the completion time on the inbound side. This model was based on real-time information, in which the number of receiving trucks, the content of trucks, the arrival time of trucks and the unloading time of trucks were dynamic.
Conclusions and future research directions
This literature review focusses on cross-docking operations under uncertainty.
Conclusions and future research directions

This literature review focusses on cross-docking operations under uncertainty. The selected studies addressed various issues in cross-docking at tactical and operational levels. Since the focus is on optimising operations with existing infrastructure and facilities, studies on strategic-level problems were excluded. The framework presented in Figure 4 illustrates the composition of uncertainties in cross-docking operations. Based on the results derived from reviewing the literature, several gaps have been identified.

First, according to Boysen and Fliedner (2010), truck arrival time is often uncertain. The causes of this uncertainty include weather conditions, traffic conditions and truck failure. While several authors considered truck arrival time as uncertain, these studies remain far from applicable to the practical environment. A main limitation is yard management and the effects of uncertain arrival times and limited yard storage on cross-docking operations when there are deadlines for receiving and shipping trucks.

Second, the availability of resources significantly influences cross-docking operations. Forklifts, conveyors and labour are the most common resources for unloading, transferring and loading the products. In the literature, some studies considered limited resources. However, the assumptions used in developing the models are unrealistic and cannot be used for practical solutions (Amini and Tavakkoli-Moghaddam, 2016; Fatthi et al., 2016; Ladier, 2014; Li et al., 2012; Shi et al., 2013; Soanpet, 2012; Zouhaier and Ben Said, 2017a, b). If temporary storage has unlimited capacity, the impact of resource limitations is not visible, as all excess products can simply be moved to temporary storage. If the storage capacity is not sufficient, the operations of the cross-docking centre will be disrupted. Therefore, models combining limited temporary storage with limited resource capacity may provide meaningful solutions to optimise cross-docking operations.

Finally, the departure time of trucks depends on arrival time, truck processing time and the availability of resources inside the cross-docking centre. To date, the literature is limited to arrival time and due dates for shipping trucks (Acar et al., 2012; Acar, 2004; Fatthi et al., 2016; Ladier, 2014; Ladier and Alpan, 2016b; Ladier et al., 2014; Walha et al., 2014). Future research can focus on developing integrated solutions through several steps. In the first phase, the process of optimising departure time and all related activities should be considered in the model. In the second phase, the impact of limited yard storage and temporary storage should be addressed. Finally, the effect of deadlines on the overall performance of cross-docking centres and on the truck capacity occupied by loaded products should be examined, because in some cases, with deadlines on shipping trucks, the capacity that can actually be used may be smaller. Limited yard and temporary storage can also increase the waiting time of shipping trucks and therefore increase carbon emissions. This is another gap that should be addressed in future research. The result of this review shows that the combination of uncertain factors with the physical characteristics of cross-docking centres is one of the leading research areas that deserves more attention.
Three-grating monolithic phase-mask for the single-order writing of large-period gratings

An optimized achromatic high-efficiency monolithic phase mask is presented, whose principle was demonstrated and described in reference [1]. The mask comprises three submicron-period diffraction gratings on a single substrate side that create a purely single-spatial-frequency interferogram of large period. The optical scheme is that of an integrated Mach-Zehnder interferometer in which all light-circulation functions are performed by diffraction gratings. The paper describes the operation principle of the phase mask and the fabrication process.

INTRODUCTION

Phase masks have long been used for the fabrication of Fibre Bragg Gratings (FBGs) [2]-[4]. They offer the notable advantage over the more classical technique of creating an interferogram in a Mach-Zehnder interference scheme [5] that the period and the location of the created interferogram only depend on the period and location of the phase mask grating. The price to pay is the impossibility of adjusting the period of the interferogram, which is imprinted in the phase mask corrugation. This is however a minor drawback in the writing of long gratings of constant period [6], and in the writing of a number of identical gratings. Another penalty is the restriction of the range of periods which can be printed: the generated interferogram only has a single spatial frequency in the operation regime where the +1st and -1st orders, and only these, can propagate. Somewhat paradoxically, it is therefore more difficult to write a supra-micron than a submicron period grating with a phase mask, since the sensitivity spectrum of most photoresists is below 500 nm wavelength. Micrometre-scale period gratings are however needed, for instance in IR spectroscopy and in grating scales for displacement sensors, where phase mask printing would be very advantageous industrially when associated with a "write on the fly" strategy [6].

The present paper describes an alternative phase mask structure permitting the generation of a high-contrast, single-spatial-frequency interferogram of arbitrarily large period. It is an integrated monolithic Mach-Zehnder scheme operating in the spatial frequency domain, similarly to a heterodyne frequency scheme: the phase mask comprises a central beam-splitting grating and two lateral grating zones at either side of the central corrugation, playing the role of the mirrors of the Mach-Zehnder interferometer. The spatial frequency of the generated interferogram is equal to twice the difference between the spatial frequencies of the central and lateral gratings. After the functionality of this dual-grating phase mask was demonstrated with a laboratory prototype based on a simple photoresist technology [1], we report here on its optimization and implementation in hard materials by means of e-beam writing and reactive ion etching.

2 PHASE MASK PRINCIPLE

The device described here is a monolithic phase mask composed of three diffraction gratings: a wave-splitting grating of period Λ1 with two gratings of period Λ2 at either side of the latter. The functionality of such a monolithic phase mask was demonstrated by using photoresist gratings defined at the incidence side of a thick glass substrate [1]. Despite its poor efficiency, that prototype demonstrated the principle of splitting an incident beam with a transmission grating G1 of period Λ1, directing its + and -1st diffracted orders to two reflection gratings G2 of period Λ2 that redirect both incoming beams via their -1st diffraction order below the substrate, where they overlap and interfere. In the present paper the principle is the same even though the configuration is different.

An exposure laser beam enters the substrate through the air/SiO2 interface and impinges onto grating G1 of period Λ1 located on the opposite side of the substrate. This grating generates the + and -1st orders in reflection. The line/space and the aspect ratio of this reflection grating are chosen to essentially cancel the 0th reflected order. The diffracted orders propagate upwards in the substrate and are reflected at the opposite SiO2/air interface under total internal reflection. They then impinge onto two diffraction gratings G2 of period Λ2 located at each side of G1. The function of gratings G2 is to redirect the beams coming from G1 outside the substrate using the -1st transmitted order. The parameters of gratings G1 and G2 are chosen so as to optimize the overall diffraction efficiency; their design is described hereunder. Figure 1 is a diagram of the monolithic phase-mask element.

Eq. (1) relates the grating periods Λ1 and Λ2 of gratings G1 and G2 to the period p of the fringe pattern generated in the overlap zone of the two beams. This phase mask is achromatic, i.e. the interferogram period p only depends on Λ1 and Λ2:

p = Λ1 Λ2 / (2 (Λ1 − Λ2))    (1)

The objective of the work presented here was to find and design the optimum configuration permitting to print a grating of 2 µm period. Such a period is not printable by means of a standard phase mask because of the complexity of the multi-order generated interferogram, and it was shown to be achievable by means of a Mach-Zehnder interferometer configuration [1]. Periods Λ1 = 420 nm and Λ2 = 380 nm generate an interferogram of 2 µm period according to Eq. (1); furthermore, G1 gives rise, at the 442 nm wavelength of an HeCd laser, to diffracted beams experiencing total internal reflection at the opposite substrate side.

MODELLING

A code based on the true-mode method [7] is used to find the parameters optimizing the diffraction efficiencies. Grating G1 of period Λ1 will be etched in the fused silica substrate and coated with aluminium. The code is used to optimize the width and the depth of the fused silica lines so as to have the maximum power distributed in the + and -1st orders while minimizing the 0th reflected order. Figure 2 is a schematic sketch of the optimized structure. Figure 3 shows the power efficiencies of the 0th reflected order and the +/-1st reflected orders; Figure 3 top gives the power efficiency versus the line width and the curves at the bottom show the same versus the SiO2 etched depth. The curves obtained by the modelling code show that for a SiO2 line width of 170 nm the efficiency of the + and -1st reflected diffraction orders has a maximum (41.67% of the incident power) while the 0th reflected order admits a minimum (0.02%). Fixing the SiO2 line width at 170 nm, Figure 3 shows the tolerance on the etching depth, with a maximum at 135 nm. The optimized parameters of G1 are summarized in Table 1.

Transmission gratings G2 have been designed using an incidence angle on the grating from the fused quartz substrate side equal to 45.8°, according to the modelling result of G1. G2 is a grating of period 380 nm with grating lines etched all through a high refractive index material (Si3N4, n = 2.07, k = 0.006 @ λ = 441.6 nm) with air under and between the silicon nitride lines. Figure 4 is a sketch of grating G2. The results of the modelling are presented in Figure 5: at the top the curves represent the power efficiency of the -1st transmitted diffracted order (transmitted -1 in Table 2) versus the Si3N4 line width, and at the bottom versus the thickness of the Si3N4 layer. The curves of Figure 5 show that the optimal parameters for G2 are grating lines with a depth of 210 nm all through the Si3N4 layer and a line/space ratio of 105/380 = 0.28. Table 2 summarizes the parameters and gives the efficiency distribution in each diffracted order generated by G2.

The expected efficiency of the phase mask is given by the product of the efficiency of the -1st reflected order of G1 by that of the -1st transmitted order of G2, multiplied by 2. With the optimal parameters the theoretical efficiency is 0.417 × 0.879 × 2 = 73.3%.

PHASE MASK FABRICATION

The most important feature in the dual-grating phase mask is the parallelism between all grating lines, to prevent any Moiré effect in the interferogram. This is the reason for writing all three grating zones at the same substrate side, so as to prevent any tilt resulting from unloading and reloading the substrate on the e-beam chuck. A silicon nitride layer was deposited by PECVD (Plasma Enhanced Chemical Vapour Deposition) on a thick (3 mm) fused silica substrate. The corrugations of G1 and G2 are not identical: G1 is reflective whereas G2 is a transmission grating. Therefore the samples must be prepared before the e-beam writing; G1 will be etched in the fused silica substrate while the gratings G2 will be etched in the Si3N4 layer.

The grating lines are then written in the photoresist by a Vistec EBPG 5000+ electron beam nanowriter. The copper conductive layer is removed by wet etching, and the grating lines are revealed by using the appropriate development process. The lift-off technique is used to create the etch-mask for the RIE processes. A chromium layer (25 nm) is deposited by thermal evaporation at the bottom of the grooves and at the top of the photoresist layer. The sample is dipped in acetone to lift off the chromium on the resist layer while keeping it on the other areas, where it serves as an etch-mask. The Si3N4 layer is etched by CHF3-based RIE down to the silica surface (i.e. 185 nm), a plastic foil protecting the G1 site. The same operation is performed for the etching of the SiO2 grating with the same etchant to a depth of 135 nm, the G2 sites now being protected by a plastic foil. After the etching steps, the chromium remaining on top of the gratings is removed by an ammonium cerium (IV) nitrate and acetic acid based wet etching. The operations are summarized in Figure 6.

Grating G1 is then made reflective by aluminium evaporation, a plastic foil being used to protect the rest of the substrate. A high vacuum (5 × 10⁻⁷ mbar) and a sufficiently slow deposition rate (2 Å/s) are used to guarantee the correct filling of the grooves by aluminium; if the deposition speed is too high, the aluminium in the grooves grows in the form of aggregates. Figure 7 is a picture of the monolithic phase mask made on a 3 mm-thick substrate (1 inch diameter) with a grating line length of 15 mm.

The phase mask was characterized by means of SEM scans performed from the top and also from the edge after cleaving one of the fabricated phase masks. The SEM pictures of G1 and G2 show that the corrugations have close to the desired aspect ratio. Figure 9 left is a side picture of grating G2 etched in the Si3N4 layer; the shape of the grating lines is very close to a rectangle with the optimized parameters. The phase masks were also characterized optically: the powers of the two output beams generated by the phase mask were measured. The efficiency of the device is defined as the ratio of the summed power of the two output beams to the power of the beam incident on G1. The best sample shows an efficiency of 71%, which is very close to the 73% expected by modelling the optimized gratings.

LONG GRATING PRINTING

The phase mask shown in Figure 7, exhibiting the best efficiency, was implemented on a "write on the fly" bench [6]. Figure 10 describes its operation: the principle is based on Scanning Beam Interference Lithography (SBIL). The phase mask is illuminated by a collimated laser beam (He-Cd laser, λ = 441.6 nm) of 1.5 mm diameter under normal incidence. An interference pattern of 2 µm period is created in the overlap volume of the two diffracted beams. The lines of the interferogram are perpendicular to the displacement direction of the bench. A long substrate covered with photoresist (Shipley SPR 505) is placed on the translation bench, the distance between the phase mask and the substrate being adjusted so as to have the photoresist layer at about the middle of the beam overlap volume. The displacement of the stage is controlled by a grating displacement sensor of 1 µm period. The 500 nm period sinusoidal electrical signal is TTL converted. An electro-optical modulator cuts the laser beam each time the substrate travels by half a period (i.e. 1 µm) and lets it through for the second half period, to prevent a washing out of the printed latent grating. Figure 11 is a picture of 4 diffraction grating tracks of 2 µm period; they have been written on 3 inch substrates at a speed of 800 µm/s.

CONCLUSION

The proof of principle of a new monolithic dual-grating phase mask was made with the printing of a 2 µm period grating using a 441.6 nm wavelength laser beam. The range of grating periods is easily controlled by the splitter and recombiner gratings: by changing the period of G2, periods between 250 nm and 2 µm and beyond can be written with a pure single-spatial-frequency interferogram. The high efficiency obtained by using high-index splitter and recombiner gratings, more than 70%, permits high writing speed, dynamic exposure and reduced exposure time. By lengthening the grating lines of the monolithic phase mask up to the dimension which an e-beam nanowriter can write (5 inches for the Vistec used), large and long diffraction gratings of high spatial coherence can be written efficiently.

FIG. 1 Monolithic phase mask and related beam path.
FIG. 2 Sketch of the binary corrugation of grating G1 etched in the fused silica substrate and filled with aluminium.
FIG. 6 Phase mask grating fabrication steps. Top left: e-beam resist and copper layer deposition. Top right: grating pattern in the resist layer. Bottom right: lift-off of the chromium etch-mask. Bottom left: etching in SiO2 (G1) and Si3N4 (G2).
FIG. 7 Picture of a monolithic phase mask made on a 1 inch substrate. Gratings G2 are visible at both sides of grating G1 covered by an aluminium layer.
FIG. 8 SEM picture of grating G1 before aluminium deposition.
FIG. 10 Diagram of the "write on the fly" bench.
FIG. 11 Pictures of 2 μm period grating tracks written with the "write on the fly" bench.
TABLE 1 Modelling parameters used for G1 and expected order efficiencies.
TABLE 2 Modelling parameters used for G2 and expected order efficiencies.
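To make the design relation in Eq. (1) concrete, the short script below recomputes the interferogram period for the grating periods quoted above (Λ1 = 420 nm, Λ2 = 380 nm) and checks that the first diffracted orders of G1 indeed undergo total internal reflection at the upper silica/air interface for the 441.6 nm He-Cd line. The refractive index of fused silica used here (≈1.466) is an assumed value for illustration only; the paper itself relies on rigorous true-mode modelling rather than this simple grating-equation estimate.

```python
import math

# Grating periods of the splitter (G1) and recombiner (G2) gratings, in metres
lam1 = 420e-9
lam2 = 380e-9

# Eq. (1): interferogram period from twice the difference of spatial frequencies
p = 1.0 / (2.0 * (1.0 / lam2 - 1.0 / lam1))
print(f"interferogram period p = {p * 1e6:.2f} um")   # ~2.0 um

# Check total internal reflection of the +/-1st orders of G1 inside the substrate
wavelength = 441.6e-9        # He-Cd laser line
n_silica = 1.466             # assumed index of fused silica at 441.6 nm
sin_theta1 = wavelength / (n_silica * lam1)            # grating equation, normal incidence
theta1 = math.degrees(math.asin(sin_theta1))           # 1st-order angle inside silica
theta_c = math.degrees(math.asin(1.0 / n_silica))      # critical angle of the silica/air interface
print(f"1st-order angle: {theta1:.1f} deg, critical angle: {theta_c:.1f} deg")
print("total internal reflection" if theta1 > theta_c else "no TIR")
```

With these assumptions the first-order angle comes out at about 45.8° inside the substrate, consistent with the design incidence angle on G2 quoted in the modelling section.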
Milnacipran: a unique antidepressant? Tricyclic antidepressants (TCAs) are among the most effective antidepressants available, although their poor tolerance at usual recommended doses and toxicity in overdose make them difficult to use. While selective serotonin reuptake inhibitors (SSRIs) are better tolerated than TCAs, they have their own specific problems, such as the aggravation of sexual dysfunction, interaction with coadministered drugs, and for many, a discontinuation syndrome. In addition, some of them appear to be less effective than TCAs in more severely depressed patients. Increasing evidence of the importance of norepinephrine in the etiology of depression has led to the development of a new generation of antidepressants, the serotonin and norepinephrine reuptake inhibitors (SNRIs). Milnacipran, one of the pioneer SNRIs, was designed from theoretic considerations to be more effective than SSRIs and better tolerated than TCAs, and with a simple pharmacokinetic profile. Milnacipran has the most balanced potency ratio for reuptake inhibition of the two neurotransmitters compared with other SNRIs (1:1.6 for milnacipran, 1:10 for duloxetine, and 1:30 for venlafaxine), and in some studies milnacipran has been shown to inhibit norepinephrine uptake with greater potency than serotonin (2.2:1). Clinical studies have shown that milnacipran has efficacy comparable with the TCAs and is superior to SSRIs in severe depression. In addition, milnacipran is well tolerated, with a low potential for pharmacokinetic drug–drug interactions. Milnacipran is a first-line therapy suitable for most depressed patients. It is frequently successful when other treatments fail for reasons of efficacy or tolerability. Introduction Depression is characterized by the presence of two core symptoms, depressed mood and anhedonia (decreased pleasure or interest). It is also accompanied, however, by a plethora of other signs and symptoms, such as changes in appetite and sleeping, fatigue and loss of energy, psychomotor agitation or retardation, feelings of worthlessness or inappropriate guilt, diminished ability to think or concentrate, and recurrent thoughts of death or suicide. 1 A relationship exists between the monoamine neurotransmitters in the brain, norepinephrine (NE) and serotonin (5-hydroxytryptamine, 5-HT) and the symptoms of major depressive disorder ( Figure 1). 2 Specific symptoms are thought to be associated with the increase or decrease of specific monoamines, implying the involvement of specific neurochemical mechanisms. Virtually all antidepressants increase the synaptic concentrations of 5-HT and/or NE by blocking the reuptake of one or both of these neurotransmitters. The archetypal tricyclic antidepressants (TCAs) block NE and 5-HT transporters to a varying extent depending on the particular compound. 3 Although they are among the most effective antidepressants available, 4 their poor tolerance and toxicity in overdose due to the involvement of other neurotransmitter systems make them difficult to use at effective doses. 5 The principal side effects of the TCAs are considered to be due essentially to their relatively high affinity for α 1 -adrenergic receptors, H 1 -histamine receptors, and muscarinic cholinergic receptors. 6 The selective serotonin reuptake inhibitors (SSRIs) which inhibit selectively the single neurotransmitter, 5-HT, are effective antidepressants. 
Although they have no affinity for α1-adrenergic receptors, H1-histamine receptors, and muscarinic cholinergic receptors, and are better tolerated than TCAs, 6 they have their own specific problems, such as aggravation of sexual dysfunction, interaction with coadministered drugs and, for many, a discontinuation syndrome. 7 In addition, some of them appear to be less effective than TCAs, with a number needed to treat for TCAs of about four compared with six for SSRIs in primary care. 8 The difference is most pronounced in more severely depressed patients. 9 In general, antidepressants achieve a response (≥50% reduction in baseline depression score) in less than 70% of patients and remission (a complete absence of depressive symptoms) in less than 50%.

Increasing evidence of the importance of NE in the etiology of depression 10 and the idea that "two actions are better than one" have led to the development of a new class of compounds that block the reuptake of both 5-HT and NE without the nonspecific, side effect-inducing receptor interactions of TCAs. This class, the serotonin (5-HT) and NE reuptake inhibitors (SNRIs), comprises venlafaxine (and its active metabolite, desvenlafaxine), duloxetine, and milnacipran. 11 By definition, the SNRIs inhibit both 5-HT and NE transporters. There is, however, considerable difference in their selectivity for the two transporters (Table 1 and Figure 2). Venlafaxine has a much greater affinity for the 5-HT transporter than for the NE transporter. At low doses, it probably inhibits almost exclusively the 5-HT transporter, acting like an SSRI, with significant NE reuptake inhibition only occurring at higher doses. Duloxetine has a more balanced affinity, but is still more selective for the 5-HT transporter. Milnacipran is the most balanced SNRI, and some studies have even found it to be slightly more potent at inhibiting NE reuptake than 5-HT reuptake.

There is frequently confusion between the terms "selectivity" and "potency", which refer to two different entities. Potency reflects the concentration of the antidepressant inhibiting 50% of uptake or binding to the transporter, depending on the technique used. Thus, from Table 1 it can be seen that duloxetine is 154 times more potent than milnacipran at blocking the binding of 5-HT to the transporter (ie, 154 times more milnacipran is required to obtain the same effect). To block the binding of NE to its transporter, duloxetine is about 27 times more potent than milnacipran. If absorption, metabolism, distribution, brain penetration and distribution, and elimination were identical for the two drugs, it would be necessary to give 154 times more milnacipran than duloxetine to achieve the same effect on 5-HT reuptake and 27 times more milnacipran to have the same effect on NE reuptake. Of course, the kinetic parameters vary considerably between these two compounds, and certain parameters are impossible to determine in humans (eg, brain penetration), and hence this calculation remains purely theoretical.

The selectivity of an antidepressant is the ratio of the potency values for NE and 5-HT reuptake inhibition (or inhibition of binding to the transporter). As shown in Table 1, milnacipran has a selectivity close to 1, duloxetine close to 10 (in favor of 5-HT), and venlafaxine close to 30. Thus, in a dose titration, when milnacipran starts to inhibit 5-HT reuptake, it also starts to inhibit NE reuptake; when it inhibits 5-HT reuptake by 50%, it also inhibits NE reuptake by approximately 50%, and so on. Increasing the dose does not alter the "nature" of the effect.
At all doses it has an equivalent effect on the two neurotransmitters systems. In contrast, a dose titration with venlafaxine will give (eg, at 75 mg) an initial inhibition of 5-HT reuptake with no inhibition of NE uptake. Only at much higher doses (eg, 200-250 mg) is there any significant inhibition of NE reuptake, but at this dose the inhibition of 5-HT reuptake is already 100%. Thus, titrating venlafaxine changes the "nature" of its effect from a SSRI to a SNRI as the dose is increased. The situation with duloxetine is intermediate between milnacipran and venlafaxine. There are some indications that the mechanism of milnacipran may be more complex than a simple action at the monoamine transporter, and thus is different from the other SNRIs. A study assessed the effect of milnacipran on the firing activity of dorsal raphe 5-HT neurons and locus coeruleus NE neurons using extracellular unitary recording in rats. 13 The authors concluded that milnacipran had profound effects on the function of 5-HT and NE neurons, but that the mechanism by which 5-HT neurons regained their normal firing during milnacipran treatment appears to implicate the NE system. In a more recent study, 14 duloxetine and venlafaxine were found to increase 5-HT levels in the brainstem and 5-HT terminal areas, whereas milnacipran increased 5-HT levels only in the brainstem. Significant reductions in 5-HT turnover were observed in various forebrain regions, including the hippocampus and hypothalamus, after treatment with duloxetine or venlafaxine, but not after milnacipran. In addition, venlafaxine and duloxetine significantly increased dopamine (DA) levels and decreased DA turnover in the nucleus accumbens, whereas milnacipran only increased DA levels in the medial prefrontal cortex. The authors concluded that the effects of milnacipran were unique because it caused increases in DA in the medial prefrontal cortex and in 5-HT in the midbrain without any changes in monoamine turnover. They suggested that milnacipran might exert its therapeutic effects by activating the dopaminergic system in the medial prefrontal cortex, and that milnacipran was in this respect different from duloxetine and venlafaxine. Some notable characteristics of milnacipran In addition to its balanced action on the two monoamine transporters, preclinical and clinical studies have shown that milnacipran possesses certain characteristics which are relatively unusual in an antidepressant. Milnacipran has no active metabolites. Unlike the majority of antidepressants, milnacipran is only metabolized to a very minor extent, with most of the administered drug being excreted in the urine either unchanged or as the inactive glucurono-conjugate. 15 Whereas most antidepressants interact with cytochrome P450 enzymes as inhibitors, inducers, or substrates, 16 milnacipran has been shown to be essentially devoid of interactions with any cytochrome P450 enzyme. 17 In addition, milnacipran binds to only a very limited extent (13%) to serum albumin. 15 Milnacipran, therefore, has a low risk of pharmacokinetic drug-drug interactions. Depression is associated with sexual disturbances, including decreased libido, anorgasmia, and erectile problems. Since introduction of the SSRIs, it has become apparent that aggravation of sexual dysfunction is a frequent problem for patients taking these drugs, with some studies reporting rates as high as 75%. 
18 Sexual dysfunction caused by SSRIs is related to stimulation of 5-HT2 and 5-HT3 receptors, but its origin is complex and probably involves other systems as well. 19 Venlafaxine 20 and duloxetine 21,22 also exacerbate sexual dysfunction at frequencies similar to those seen with SSRIs. A study using the Sexual Function and Enjoyment Questionnaire 23 showed no aggravation of sexual disturbance with milnacipran, which improved sexual function in parallel with improvement in other symptoms of depression.

Following abrupt discontinuation, most SSRIs, and paroxetine in particular, produce a number of adverse events, including dizziness, nausea, headache, paresthesia, vomiting, irritability, and nightmares. 24 Venlafaxine and duloxetine produce similar discontinuation emergent adverse events. 25,26 A post hoc analysis of patients abruptly withdrawn from paroxetine or milnacipran as part of a double-blind comparative study 27 showed that paroxetine produced significantly more discontinuation emergent adverse events than milnacipran. In addition, the nature of the adverse events differed between the two antidepressants, with patients withdrawn from paroxetine showing the classical symptoms of dizziness, anxiety, and sleep disturbance (insomnia and nightmares), while those withdrawn from milnacipran showed only increased anxiety. However, some discontinuation symptoms have been reported, and good clinical practice and regulatory authorities always recommend gradual discontinuation from any psychotropic drug.

Certain antidepressants are associated with clinically significant weight changes. In particular, some TCAs including amitriptyline, certain SSRIs including paroxetine, and other antidepressants, such as mirtazapine, are frequently associated with significant weight gain. 28 Data from a wide range of clinical trials 29 have shown that 82% of patients taking milnacipran 100 mg/day for 3 months or more have no clinically significant weight change (defined as >5% of body weight). Of the remainder, 10% had clinically significant weight loss, while 8% had clinically significant weight gain.

Comparison of milnacipran with TCAs and SSRIs

Seven randomized, double-blind trials with similar designs have compared the efficacy and tolerability of milnacipran and TCAs in patients with major depression. At a dose of 100 mg/day the response rate with milnacipran (64%) was comparable with that of the TCAs (67%). In contrast with the TCAs, milnacipran was very well tolerated by patients. 30 A meta-analysis of studies comparing milnacipran at 100 mg/day with the SSRIs, fluvoxamine (200 mg/day) and fluoxetine (20 mg/day), in moderately to severely depressed hospitalized patients, 31 reported significantly more responders (64%) with milnacipran than with the two SSRIs (50%, P < 0.01) and a significantly higher remission rate (38.7% versus 27.6%, P < 0.04). Another study, published subsequent to this meta-analysis, compared milnacipran with paroxetine 20 mg/day in less severely depressed outpatients, and reported similar remission rates for the two antidepressants. 32 Table 2 summarizes two studies, each comparing milnacipran with an SSRI, one in moderately to severely depressed hospitalized patients, 33 and the other in less severely depressed outpatients. 34 The two studies, which investigated two different SSRIs in different treatment settings, cannot be compared directly.
Nevertheless, it is interesting to note that milnacipran was associated with significant improvement in both studies. In contrast, the SSRIs led to an improvement comparable with that of milnacipran in the study of less severely depressed patients, but not in the study of patients with severe depression. Unlike milnacipran, SSRI treatment did not achieve the additional reduction in depression score needed in the severely depressed patients to reach response. Clearly this analysis is only indicative, and the severity of depression was not the only factor that differed between the studies. Nevertheless, the results are compatible with other data 34 suggesting that SSRIs may have a limited capacity for improving depressive symptoms, which becomes more evident in more severely depressed patients.

In the study comparing milnacipran with paroxetine 20 mg/day, 32 the overall efficacy of the two antidepressants was similar. However, milnacipran was significantly better than paroxetine in the subgroup of patients scoring maximally at baseline on the retardation factor (slowness of thought and speech, impaired ability to concentrate, and decreased motor activity; item 8) of the Hamilton Depression Rating Scale (HDRS, Figure 3). This is compatible with the finding that reduced noradrenergic neuronal tone is related to psychomotor retardation. 35 Furthermore, the selective NE reuptake inhibitor has been shown to improve psychomotor retardation systematically, even when other symptoms were not improved. 36 These data suggest that depressed patients with marked psychomotor retardation may benefit particularly from treatment with milnacipran.

In studies comparing milnacipran with SSRIs, both compounds are generally well tolerated. The most frequent adverse event with both milnacipran and SSRIs is nausea, although this occurred less frequently with milnacipran. 31 As would be expected, adverse effects that are probably related to noradrenergic stimulation, such as dry mouth, sweating, and constipation, occur more frequently with milnacipran than with SSRIs, although the differences are not as large as might be expected. 31 A meta-analysis of all published studies comparing milnacipran with SSRIs 37 concluded that patients on milnacipran had the same probability of obtaining a clinical response as those on SSRIs. As with many meta-analyses, however, this global analysis grouped certain atypical studies which should have been analyzed separately. For example, one study 38 comparing milnacipran with fluoxetine used once-daily dosing for both of the antidepressants. In view of the half-life of milnacipran (7-8 hours), this protocol was inappropriate given that twice-daily dosing of milnacipran is recommended. In two studies, 39,40 each comparing two doses of milnacipran with a single dose of an SSRI, the meta-analysis inappropriately compared each dose of milnacipran with the SSRI, using the single SSRI group twice, thus giving excessive importance to the SSRI groups. Most importantly, however, the analysis combined, without distinction, data from a study in severely depressed hospitalized patients 33 (baseline HDRS > 32) with data from studies in mildly depressed outpatients 32,41 (baseline HDRS < 24). Another analysis of studies comparing milnacipran with SSRIs 42 concluded that, on the basis of all available evidence, milnacipran, like duloxetine and mirtazapine, had "probable superior efficacy" compared with SSRIs.
Comparison of milnacipran with other SNRIs

With the exception of the study described in this supplement, 43 which showed equivalent efficacy of milnacipran and venlafaxine at high doses, no studies comparing milnacipran with other SNRIs have been carried out. However, all three SNRIs have been compared with SSRIs, and comparisons of venlafaxine with SSRIs and milnacipran with SSRIs have been subjected to meta-analyses which have been juxtaposed for comparison. 11 A similar level of efficacy for the SSRIs was seen across all of the studies. Milnacipran, as well as venlafaxine, produced remission rates about 10% higher than those of the SSRIs. 11 More recently a meta-analysis of 93 trials comparing a dual-action antidepressant (venlafaxine, milnacipran, duloxetine, mirtazapine, mianserin, or moclobemide) with one or more SSRIs has been published. 44 This analysis, involving over 17,000 patients, confirms the overall superiority of the dual-action antidepressants compared with the SSRIs (Figure 4). In addition, this meta-analysis shows a similar level of efficacy for all of the dual-action antidepressants, with the exception of duloxetine which, in this analysis, was less effective than the other dual-acting agents. Thus, it would seem reasonable to conclude that there is a comparable level of antidepressant efficacy for milnacipran and venlafaxine, and probably duloxetine, although further data are required for the latter.

Similarly, in the absence of direct comparative studies between the SNRIs, it is not possible to draw any firm conclusions on comparative tolerability. However, in the various studies comparing an SNRI with SSRIs, the side effect profiles of all three SNRIs show qualitative differences in comparison with those of the SSRIs. The most common adverse effects with the SSRIs are nausea, vertigo/dizziness, dry mouth, and insomnia. Only dry mouth appears to be systematically more common with SNRIs than with SSRIs. The dry mouth experienced with SNRIs is of noradrenergic origin and is analogous to that encountered during stress. The overall frequency of adverse events with milnacipran appears to be less than for venlafaxine and duloxetine. 11 However, direct head-to-head comparisons are needed before any firm conclusions can be drawn.

Fatalities have been reported due to overdose of venlafaxine alone or in combination with other compounds, 45,46 often following serotonin syndrome. The fatal toxicity index (deaths caused by a drug per million prescriptions) is a very crude measure of drug toxicity and should be interpreted with caution. Nevertheless, fatal toxicity studies from England, Scotland, and Wales have provided some interesting data. Deaths due to acute poisoning by a single antidepressant have been compiled for the period 1993-1999. 47 While the SSRIs caused between 1 and 3 deaths/million prescriptions, venlafaxine had an index of over 13 deaths/million prescriptions. A subsequent analysis for the period 1998-2000 found similar results (1-3 and 13 deaths/million prescriptions for SSRIs and venlafaxine, respectively). 48 Milnacipran appears not to cause any particular concern in overdose. Patients have absorbed up to 2.8 g (one month's supply at the recommended dose) without any major effects other than sedation. In particular, no cardiovascular complications have been recorded. No fatalities have been recorded with milnacipran alone. 49 At the present time, no cases of lethal overdose with duloxetine have been published.
Efficacy of milnacipran in preventing recurrent depressive episodes

Major depression is generally a recurrent disorder, and 75%-80% of patients experience repeated episodes. 50 There is also evidence that the risk of recurrence tends to increase with each successive episode. 50,51 The role of an efficient antidepressant is therefore not only to get patients well, but to keep them well. A recurrence prevention study with milnacipran consisted of a six-week open treatment period followed by a continuation phase of 18 weeks for the responders. Patients with a sustained remission at the end of this 24-week period were randomized to continuing treatment with milnacipran or to placebo under double-blind conditions and followed for a further 12 months. There was significantly less recurrence of depressive episodes in milnacipran-treated patients, as determined by Kaplan-Meier analysis of the cumulative probability of recurrence. 52 By the end of the 12-month double-blind phase, 16.3% of patients treated with milnacipran had relapsed compared with 23.6% of patients on placebo (P < 0.05). The level of tolerability and safety of milnacipran during this 18-month study was equivalent to that reported in relapse/recurrence prevention studies with SSRIs. 53,54

Milnacipran: a unique antidepressant?

Whether or not the profile described above justifies referring to milnacipran as a unique antidepressant, it is clear that this agent has a distinct combination of characteristics. It is the only SNRI with a balanced (1:1) activity on NE and 5-HT reuptake inhibition. Its efficacy in mild, moderate, and severe depression and its good overall tolerability are combined with a low risk of pharmacokinetic drug-drug interactions and sexual dysfunction, minimal effects on body weight in normal-weight patients, and a lack of toxicity in overdose. This particular profile qualifies milnacipran as a first-line antidepressant for many depressed patients. Milnacipran may be particularly well suited for low-energy, slowed-down patients. Patients who have been withdrawn from SSRIs or other antidepressants due to lack of efficacy or intolerance may find milnacipran to be an effective therapeutic option. Note that this overview highlights what we consider to be the most interesting and relevant points of the profile of milnacipran and does not claim to be exhaustive. Approved indications and safety recommendations may vary between countries, so prescribers should check the summary of product characteristics in their own country.
Influence of Two Mass Variables on Inertia Cone Crusher Performance and Optimization of Dynamic Balance : Inertia cone crushers are widely used in complex ore mineral processing. The two mass variables (fixed cone mass and moving cone mass) affect the dynamic performance of the inertia cone crusher. Particularly the operative crushing force of the moving cone and the amplitude of the fixed cone are affected, and thus the energy consumption of the crusher. In this paper, the process of crushing steel slag is taken as a specific research object, to analyze the influence of two mass variables on the inertia cone crusher performance. A real-time dynamic model based on the multibody dynamic (MBD) and the discrete element method (DEM) is established. Furthermore, the influence of the fixed cone mass and moving cone mass on the operative crushing force, amplitude and average power draw are explored by the design of simulation experiments. The predictive regression models of inertia cone crusher performance are obtained using response surface methodology (RSM). After increasing the fixed cone mass, the optimized amplitude, average power and moving cone mass are decreased by 37.1%, 33.1% and 10%, respectively, compared to without the adjustment. Finally, a more effective dynamic balancing mechanism of inertia cone crusher is achieved, which can utilize the kinetic energy of a balancer, and minimize the mass of the fixed and moving cone. The fixed cone mass and moving cone mass of a balancing crusher are decreased by 78.9% and 22.8%, respectively, compared to without the balancing mechanism. Introduction Inertia cone crushers are widely used in the secondary and tertiary crushing stages of complex ore processing, such as the comprehensive recovery of steel slag [1,2]. A mantle rotates and swings in the crushing chamber, which is due to an eccentric vibrator transferring the rotational motion to the main shaft. As it flows downward between the mantle and concave, the ore particle is crushed several times. The total crushing force for the inertia cone crusher is provided by the eccentric vibrator and mantle. As the concave is located above several rubber absorbers, the concave can move and roll in three-dimensional space. Therefore, the operative crushing force is less than the theoretical force, and the energy consumption increases [3]. The subgroup including the concave and subsidiary components is defined as the fixed cone, and the subgroup including the mantle and eccentric vibrator is defined as the moving cone. At the condition of keeping other parameters invariable, the fixed cone mass and moving cone mass have a great impact on the operative crushing force, amplitude of fixed cone and energy consumption, whereas the increase in moving cone mass can increase the theoretical crushing force and amplitude directly. The decrease of fixed cone mass can decrease the operating crushing force and increase energy consumption indirectly. Here, from a manufacturer's perspective, how to determine the two mass parameters is a key problem. Furthermore, at the guarantee of reasonable crushing force achievement rate and energy consumption, minimizing the mass of the fixed and moving cone is one of the main ways to reduce manufacturing cost. Savov et al. [4] and Xia et al. [5] contributed to an initial mathematical modeling of the crushing force achievement rate. However, the models do not have the ability to take into account the effect of ore particles on the crusher. 
Additionally, no research regarding the mass of inertia cone crusher optimization has yet been published to our knowledge. Cleary et al. [6] and Andre et al. [7] studied the effect of feed properties (material strength, particle friction) and machine controls (CSS, speed) on cone crusher performances (particle distribution, throughput, power) based on the particle replacement model (PRM) in the software EDEM. Chen et al. [8] took the throughput and crushing force as the multiobjective optimization, and studied the effect of the parameters of the crushing chamber and speed on gyratory crusher performances based on the bonded particle model (BPM). These above studies can provide the main ways to optimize the variables (operation, chamber shape and feed properties). However, these simulation methods do not have the ability to take into account the effect of inertia parameters (fixed cone and moving cone mass) on operation performance (crushing force, amplitude and average power) for an inertia cone crusher. Cheng et al. [9] provided a powerful method whereby the coupling multi-body dynamics (MBD) [10,11] and discrete element method (DEM) [12,13] simulate the crushing behavior response for an inertia cone crusher. Currently no research using coupled MBD-DEM dynamic models for an inertia cone crusher has been published, except for our publication [9]. Barrios et al. [14] and Chung et al. [15], regarding coupled MBD-DEM models, provided useful attempts for high-pressure grinding rolls (HPGR). Furthermore, at the same industrial scale, the mass of the inertia cone crusher is much heavier than other cone crushers, such as hydraulic crushers and spring crushers. The reason is that the eccentric vibrator leads to the violent vibration of the crusher and the increase of energy consumption. Znamenskll et al. [16], regarding the dynamic balance of an inertia cone crusher, put forward a preliminary design. However, the dynamic balance design neither completely counteracts the excitation force nor utilizes the kinetic energy of the balancer. Ren et al. [17] utilized the kinetic energy of the balancer in the design. Nevertheless, the dynamic balancing mechanism is unstable and cannot completely counteract the excitation force, so it is difficult to widely use it in industry. As such, this paper takes the process of crushing steel slag as the analysis object, and the crushing force achievement rate, amplitude of the fixed cone and average power of the drive shaft are explored by the MBD-DEM coupling method. Moreover, we propose an approach in which the use of response surface methodology (RSM) and analysis of variance (ANOVA) optimizes the fixed and moving cone mass to achieve the optimum operation performance. The results show that the operation performance is greatly improved by increasing the fixed cone mass, which increases the manufacturing cost. Finally, in order to reduce the manufacturing cost for manufacturers and the running cost for users, a more effective dynamic balancing mechanism of inertia cone crushers is achieved. Such a mechanism not only utilizes the kinetic energy of the balancer, but also minimizes the mass of the fixed cone and the moving cone. Inertia Cone Crusher Theory The inertia cone crusher consists of a main frame, a concave, a mantle, rubber absorbers, a main shaft and an eccentric vibrator. The ore particles fall from the feed chute to the crushing chamber; then, they are squeezed by the mantle and other particles. 
Finally, the particles are discharged from the discharge zone. Figure 1 shows a vertical cross-section and a simplified MBD model for an inertia cone crusher, where α and θ are the mantle angle and the nutation angle, respectively. l0, l1 and l2 are the axis of the crusher, concave and mantle, respectively. B1 is the fixed cone, which is fixed to the main frame. B2 is the moving cone, which is fixed to the main shaft. O1 is a spherical joint; O2 is a spherical joint between B1 and the globe bearing (B4); O3 is a cylindrical joint between B2 and the eccentric vibrator (B3); O4 is a planar joint between B3 and B4; O5 is a ball-pin joint between B3 and the connecting shaft (B5); O6 is a universal joint between B5 and the drive shaft (B6); O7 is a revolute joint between B6 and the ground (B0). Crusher Dynamic Model Using MBD The generalized coordinate qi of the rigid body Bi is shown in Equation (1). where ri is the matrix of independent position variables (xi, yi, zi), and Λi is the matrix of independent angle variables (ψi, θi, φi). According to [18], the kinetic formula for Bi without any joint equations is derived in Equation (2). where q̈i is the generalized acceleration of qi, and Mi and Qi can be expressed, respectively: where mi and Ji are the mass and the inertia matrix for Bi, respectively. E and 0 denote the identity and null matrix, respectively. Fia and Tia denote the equivalent absorber forces and torques, Fif and Tif denote the joint friction forces and torques, Fig is the gravity of Bi, Fip and Tip denote the equivalent particle forces and torques, and Di is the coordinate matrix. The formula of MBD for the inertia cone crusher without any joints can be shown as: 1 2 3 4 5 6 T T T T T T T 1 2 3 4 5 6 diag( , , , (4) where q̈ is the generalized accelerations of the multi-body system. The joint Oj (j = 1,2,…,7) equations and driving motion constraints for inertia cone crusher are expressed as:  According to Equation (5), the velocity and acceleration equation can be expressed as: where Φq and Φt is the Jacobian matrix for Φ(q,t). According to the Lagrange multiplier λi (i =1,2,…,26), the formula of MBD for inertia cone crusher is derived in terms of the Lagrange multiplier matrix λ and generalized coordinate matrix q, and can be shown as: BPM Theory The BPM consists of bonding a packed distribution of particles, forming a breakage cluster [19]. As shown in Figure 2, a parallel bonding beam is created between each particle in contact, so the forces (torques) on the bonding beam are calculated from Equation (8) and Equation (9). BPM has been used in simulating the crushing behavior of particles [13,20,21] where Fbn, Tbn, Fbt and Tbt denote the bond normal force, normal torque, bond tangentialdirected force and torque, respectively. kbn and kbt are the normal and tangential stiffness per unit area. A and J are the area of the parallel bond cross-section and polar moment of inertia, respectively. where Rb, Vn, Vt, ωn, ωt, and δt are parallel bond radius, normal velocity, tangential velocity, normal angular velocity, tangential angular velocity and time step, respectively. The maximum normal and tangential stress are calculated according to Equation (10). where σbc and τbc are critical normal and critical shear strength, respectively. If the maximum stress exceeds the critical strength, the bond beam will disappear. The particle interaction depends on the Hertz-Mindlin contact model [22]. 
BPM Calibration

The feed material is a steel slag with a complex shape and size distribution in our industrial experiments on the inertia cone crusher. The feed particle size range is 50 mm to 70 mm, and a 3D model of the slag is shown in Figure 3. When using the BPM to simulate the crushing behavior of steel slag, the key is to make sure that the relevant DEM parameters of the particle are calibrated. Therefore, the Hertz-Mindlin contact model parameters were determined by uniaxial compression deformation and repose angle tests, as shown in Figure 4. Table 1 shows the contact parameters.

Figure 3. Schematic illustration of the slag particle using DEM: (a) the BPM of the slag particle formed in the EDEM software, and (b) a realistic slag shape used to create the packing structure with a normal size distribution.

The BPM-relevant parameters were determined by the Brazilian test and a corresponding simulation experiment, as shown in Figure 5. The calibration method has been described in detail in our publication [13]. By comparing the simulated tensile strength with the experimental value (10.6 MPa), we directly provide the BPM parameters, which are shown in Table 1.

The Solution of the Coupled MBD-DEM Model in Software

Combining Sections 2.1-2.3, Figure 6 shows the simulation flowchart of the inertia cone crusher using the MBD-DEM coupling method. The MBD of the geometries is calculated by the RecurDyn software, and the DEM of the particles is calculated using EDEM. As slag clusters flow downward between the mantle and the concave, their size becomes successively smaller in EDEM, as shown in Figure 7. The operative performances (operative crushing force, amplitude and average power) of the inertia cone crusher are obtained from RecurDyn.

Influencing Factors and Performance Goals

Based on Section 2 and the previous publications [3][4][5], the fixed cone mass (FM) m1 and the moving cone mass (MM) m2 are taken as the influencing factors. The operative crushing force Fo is less than the theoretical crushing force Ft under normal operating conditions. The theoretical crushing force of the inertia cone crusher is the force provided by the moving cone when the fixed cone is not moving. As such, the crushing force achievement rate ηf is taken as one of the performance goals, according to Equation (11):

η_f = F_o / F_t  (11)

When the inertia cone crusher works, the fixed cone moves horizontally and deflects around a rotation point. As such, the amplitude of the rotation point displacement and the deflection angle give a good indication of the vibration characteristics of the crusher. Theoretical and experimental results indicate that the amplitude As and the deflection angle γd have a significant positive correlation [22,23]. Ignoring the deflection angle γd, the amplitude As is taken as a performance goal. Besides this, the two mass variables have a great impact on the energy consumption of the crusher, so the average power draw of the drive shaft Pa is also taken as a performance goal.

Based on the response surface methodology (RSM) [24], the influence of the two mass variables on the performance goals is modeled with the SPSS software. The corresponding predictive regression models can be expressed as Equation (12), and the simulated experiment scheme is listed in Table A1:

η_f = f_η(m_1, m_2),  A_s = f_A(m_1, m_2),  P_a = f_P(m_1, m_2)  (12)

where fη, fA and fP are the predictive models of the crushing force achievement rate, amplitude and average power, respectively.
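As a sketch of how the response-surface models in Equation (12) can be obtained, the snippet below fits a full quadratic polynomial in the two mass variables to a set of simulated design points. The design points and responses here are placeholders standing in for the DEM-MBD experiment scheme of Table A1, and scikit-learn is used purely for convenience; the paper itself reports fits produced with SPSS.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

# Placeholder design points: fixed cone mass m1 (t), moving cone mass m2 (t)
X = np.array([[15, 3.5], [15, 4.5], [15, 5.9],
              [25, 3.5], [25, 4.5], [25, 5.9],
              [39, 3.5], [39, 4.5], [39, 5.9]], dtype=float)
# Placeholder responses, e.g. crushing force achievement rate (as fractions)
eta_f = np.array([0.78, 0.72, 0.65, 0.88, 0.84, 0.79, 0.95, 0.93, 0.90])

# Full quadratic response surface: 1, m1, m2, m1^2, m1*m2, m2^2
model = make_pipeline(PolynomialFeatures(degree=2, include_bias=True),
                      LinearRegression())
model.fit(X, eta_f)

# Predict the achievement rate for a candidate design point
m1, m2 = 30.0, 5.0
prediction = model.predict([[m1, m2]])[0]
print(f"predicted eta_f at (m1={m1} t, m2={m2} t): {prediction:.3f}")
```

The same fitting step, repeated for the amplitude and the average power, yields the three surfaces fη, fA and fP that the optimization in the next subsection evaluates.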
Because the driving speed is determined by the productivity, the driving speed of the crusher (model: GYP1200) should not exceed 550 rpm. As such, the driving speed is set to 550 rpm in this paper.

Crushing Force Achievement Rate Analysis

The influence of FM and MM on the crushing force achievement rate, and the prediction regression curves, are shown in Figure 8. The theoretical force Ft is determined only by MM, and Figure 8a shows the influence of MM on the theoretical force Ft. Figure 8b shows the influence of FM on the crushing force achievement rate under three kinds of MM (3.5 t, 4.5 t, and 5.9 t), which indicates that the crushing force achievement rate ηf increases significantly with increasing FM. Figure 8c shows the influence of MM on ηf under three kinds of FM (15 t, 25 t, and 39 t), which indicates that ηf gradually decreases with increasing MM. At the 0.05 significance level, we find that the influence of FM and MM on ηf is significant using the quadratic regression function. Figure 8d shows the influence of the interaction between FM and MM on ηf using the response surface methodology (RSM), and the corresponding results of the ANOVA for the predictive regression models are shown in Table 2. The prediction regression model of the crushing force achievement rate is expressed as Equation (13). The corresponding ANOVA for the predictive regression coefficients is shown in Table A2. At the 0.05 significance level, we find that the linear term of MM has the weakest impact on ηf, with a p-value of more than 0.05. As such, the linear term of MM is ignored in the prediction regression model of the crushing force achievement rate. As FM increases or MM decreases, ηf increases. This is because an increase in FM or a decrease in MM will decrease the amplitude of the fixed cone and increase the eccentric distance of the moving cone. However, when ηf is over 90%, the increase in ηf with increasing FM is very small, and the fixed cone mass required to reach ηf = 90% differs for different moving cone masses.

Amplitude Analysis

The influence of FM and MM on the amplitude of the fixed cone, and the prediction regression curves, are shown in Figure 9. Figure 9a shows the influence of FM on the amplitude under three kinds of MM (3.5 t, 4.5 t, and 5.9 t), which indicates that the amplitude As decreases with increasing FM. Figure 9b shows the influence of MM on the amplitude under three kinds of FM (15 t, 25 t, and 39 t), which indicates that the amplitude As increases with increasing MM. Figure 9c shows the influence of the interaction between FM and MM on the amplitude As using RSM, and Table 3 indicates that the prediction model of As fits well using the quadratic regression function at the 0.01 significance level. The prediction regression model is shown in Equation (14). It can be found from Table A2 that the quadratic term of MM has the weakest impact on the amplitude, with a p-value of more than 0.01. So, the quadratic term of MM is ignored in the predictive regression model of the amplitude. As FM increases, the amplitude As decreases, and the fixed cone becomes more difficult to move for a constant theoretical crushing force. Comparing Figures 8d and 9c, it can be found that when ηf is over 90%, the decrease in amplitude As with increasing FM is very small. However, with a change in MM, the amplitude As changes significantly for a constant FM.
Average Power Analysis

The influence of FM and MM on the average power draw, and the prediction regression curves, are shown in Figure 10. Figure 10a shows the influence of FM on the average power under three kinds of MM (3.5 t, 4.5 t, and 5.9 t), which indicates that the average power Pa decreases with increasing FM. Figure 10b shows the influence of MM on Pa under three kinds of FM (15 t, 25 t, and 39 t), which indicates that the average power Pa increases significantly with increasing MM. Figure 10c shows the influence of the interaction between FM and MM on Pa using RSM, and Table 3 indicates that the prediction model of the average power Pa fits well using the quadratic regression function at the 0.05 significance level. The prediction regression model is shown in Equation (15). Table A2 shows that the quadratic term of MM has the weakest impact on the average power Pa, with a p-value of more than 0.05. Comparing Figures 9 and 10, we can see that as FM and MM increase, the variation in the average power Pa is similar to that of the amplitude As. The reason is that as As increases, the energy consumption of the rubber absorbers increases, and the kinetic energy of the steel slag particles also increases, resulting in an increase in frictional heat energy.

Optimization Results

Combining the results of the above sections, it can be seen that when the crushing force achievement rate ηf is over 90%, the decreases in amplitude As and average power Pa with increasing FM are very small. Table 1 shows that the FM is 20 t and the MM is 5.5 t for the industrial inertia cone crusher (model: GYP1200), and the operative crushing force Fo is 697.92 kN from the simulated experiment (Table A1). If the predicted values of Fo and ηf are set to 697.92 kN and 90%, the MM is 4.95 t (Figure 8a). According to Equation (13), the optimized fixed cone mass is then 30.54 t. According to Equations (14) and (15), the optimized amplitude As and average power Pa are 5.29 mm and 125.38 kW, respectively. Compared with the simulated experiment (Table A1), the optimized amplitude, average power and MM are decreased by 37.1%, 33.1% and 10.2%, respectively. Thus, the decrease in amplitude can effectively decrease the average power and increase the crushing force achievement rate. However, the optimized FM is increased by 52.7%, so the optimized mass of the inertia cone crusher is about three times that of a hydraulic cone crusher or spring crusher of the same industrial scale, which increases the manufacturing cost. In this paper, we therefore design a more effective dynamic balancing mechanism for the inertia cone crusher, which decreases the amplitude and minimizes the mass of the inertia cone crusher.

Mechanics Principle of Dynamic Balancing

The total crushing force of the single-exciter GYP-type inertia cone crusher is provided by the eccentric vibrator and the mantle, as shown in Equation (16), where Fcr is the total crushing force, Fice is the equivalent centrifugal force generated by the mantle, and Fucve is the equivalent exciting force generated by the eccentric vibrator. The inertia cone crusher with the dynamic balancing mechanism is shown in Figure 11a. The dynamic balancing mechanism is mainly composed of a balancer and a feedback mechanism. A balancer is added to the GYP-type inertia cone crusher, which rotates similarly but on the opposite side of the vibration exciter. In this way, the vibration of the crusher can be minimized, allowing the mass of the fixed and moving cones to be decreased.
Through a feedback mechanism that increases the crushing force, the kinetic energy of the balancer can be utilized efficiently. The planar layout of the forces acting on the mantle cone is shown in Figure 11b. Fic, Fucv and Fbec are the centrifugal forces generated by the mantle, eccentric vibrator and balancer, respectively, as expressed in Equation (17), where mic, mucv and mbec are the masses of the mantle, eccentric vibrator and balancer, respectively; eic, eucv and ebec are the eccentric distances of the mantle, eccentric vibrator and balancer, respectively; and ω is the drive shaft speed. The value of Fbec should conform to Equation (18), where Fuic is the equivalent force of the centrifugal force generated by the moving cone, and eui is the equivalent eccentric distance of the moving cone. Based on the lever principle, the vector force Fbeci can be expressed as Equation (19), where Fbeci is the force acting on the moving cone through the feedback mechanism, R1 is the constraint reaction of the spherical joint O1, and Gc is the gravity of the moving cone. Comparing Equations (16) and (19), it can be seen that the inertia cone crusher with the dynamic balancing mechanism adds the feedback force relative to the single-exciter crusher. Therefore, the dynamic balancing mechanism can markedly decrease the amplitude of the fixed cone and increase the crushing force. In this way, the mass of the fixed and moving cones is minimized, and the manufacturing cost decreases significantly.

Experimental Devices

Corresponding experiments were carried out to verify the dynamic balancing mechanism. However, manufacturing an industrial-scale inertia cone crusher with the dynamic balancing mechanism was not feasible for this study. Instead, a laboratory prototype with the same dynamic balancing mechanism was developed. The amplitude, power draw and product size distribution of the laboratory crusher were collected with the experimental devices, and the results were compared with those of a crusher without the dynamic balancing mechanism. The crusher without the dynamic balance provides the same theoretical crushing force as the dynamic balance prototype. The experimental devices and dynamic signal acquisition systems are shown in Figure 12. The feed material is a 7.5-10 mm white marble, and the amplitude, power draw and product size distribution for the two driving speed levels (450 and 650 rpm) are compared in the following sections.

Amplitudes of Test Points

The displacements of two test points are sampled by displacement sensors in Figure 12a. The experimental data of the two test points in the x direction are displayed for the drive speeds of 450 rpm and 650 rpm, as shown in Figure 13. In Table 3, the test data from the crusher with and without the balancing mechanism are summarized. The results show that the amplitude and deflection angle of the balancing crusher are decreased by 80.6% and 64.2%, respectively, compared with the crusher without a balancing mechanism. Therefore, the good vibration reduction performance of the dynamic balancing mechanism is verified by experimental comparison.

Power Draw and Product Size Distribution

The input power of the motor is sampled by an electrical parameter test instrument in Figure 12b. The experimental data are displayed for the drive speeds of 450 rpm and 650 rpm, as shown in Figure 14. Figure 14 shows that the average power of the balancing crusher is decreased by 20.9%, compared with the crusher without a balancing mechanism.
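The comparison at two speed levels reflects how strongly the exciting force depends on drive speed: the centrifugal force of each rotating unbalanced element scales as m·e·ω², as described for Equation (17). The snippet below is a back-of-the-envelope illustration of that scaling and of sizing a balancer's eccentric moment against a target unbalance force; the masses, eccentric distances and target force are placeholder values, not the prototype's actual parameters, and the actual balancing condition is the one given by Equation (18).

```python
import math

def centrifugal_force(mass_kg, eccentricity_m, speed_rpm):
    """Centrifugal force of a rotating unbalanced element: F = m * e * omega^2."""
    omega = 2.0 * math.pi * speed_rpm / 60.0   # rad/s
    return mass_kg * eccentricity_m * omega**2

# Placeholder eccentric vibrator parameters, purely for illustration
m_ucv, e_ucv = 60.0, 0.05   # kg, m

for rpm in (450.0, 650.0):
    force = centrifugal_force(m_ucv, e_ucv, rpm)
    print(f"{rpm:.0f} rpm -> exciting force ~ {force / 1e3:.2f} kN")

# Sizing the balancer: choose m_bec * e_bec so that its centrifugal force,
# acting on the opposite side, offsets a target unbalance force F_target.
F_target = 6.0e3                                # N, placeholder target
omega_650 = 2.0 * math.pi * 650.0 / 60.0
balancer_me = F_target / omega_650**2           # required m_bec * e_bec, kg*m
print(f"Required balancer m*e ~ {balancer_me:.3f} kg*m at 650 rpm")
```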
The reduction in power draw shows that the dynamic balancing mechanism can effectively reduce the energy consumption and running cost of the inertia cone crusher. The product size distribution is collected with square sieves in Figure 12c. The experimental data are displayed for the drive speeds of 450 rpm and 650 rpm, as shown in Figure 15. The product size distributions show a relatively good correspondence between the cases with and without the balancing mechanism. It can be seen that the mechanism realizes the purpose of utilizing the inertia force and kinetic energy of the balancer.

Optimization Verification of the Industrial-Scale Inertia Cone Crusher

To verify the optimization performance and the two mass variables (FM and MM), simulated experiments using MBD-DEM coupling are performed on the industrial-scale inertia cone crusher with the dynamic balancing mechanism. The fixed cone mass and the moving cone mass (including the dynamic balancing mechanism mass) are 6.44 t and 3.82 t, respectively. Compared with Section 3.4, the optimized crushing force achievement rate is over 95% with the dynamic balancing mechanism. Furthermore, the optimized amplitude and average power are decreased by 33.1% and 10.2%, respectively, as shown in Figure 16. Figure 16c compares the product size distributions for the cases with and without the balancing mechanism; the balance case is slightly finer than the without-balance case. Since the amplitude, average power and product size distribution are all improved, the good crushing performance of the dynamic balancing mechanism is verified for the industrial-scale inertia cone crusher. Furthermore, the fixed cone mass and the moving cone mass are decreased by 78.9% and 22.8%, respectively. The dynamic balancing mechanism therefore significantly reduces the manufacturing cost.

Conclusions

In inertia cone crusher design, an inevitable problem is how to determine the two mass variables (fixed cone mass and moving cone mass), which affect the crusher dynamic performance. Firstly, the crushing process of the inertia cone crusher is simulated using MBD-DEM coupling. Predictive regression models, in which the two mass variables are taken as influencing factors and the crushing force achievement rate, amplitude and average power are taken as the performance goals, are explored by the design of simulation experiments. It is found that when the achievement rate ηf is over 90%, the decrease in amplitude As and average power Pa, and the increase in ηf, are very small with increasing FM. With the optimized values of ηf and FM set to 90% and 30.54 t, the optimized amplitude As, average power Pa, and MM are decreased by 37.1%, 33.1% and 10.2%, respectively, compared with the original crusher. However, the optimized FM is increased by 52.7%, which increases the manufacturing cost. In this paper, a new and more cost-effective dynamic balancing mechanism for the inertia cone crusher is therefore developed in order to reduce FM. The vibration reduction and inertia force utilization of the dynamic balancing mechanism are verified with a laboratory prototype. Moreover, the effect of the FM and MM reduction is verified by the MBD-DEM simulation of the industrial inertia cone crusher. Compared with the without-balance case (Section 3.4), the amplitude, average power and product size distribution are improved, and the FM and MM are decreased by 78.9% and 22.8%, respectively.
As such, from a manufacturer's perspective, the manufacturing cost decreases significantly. In order to also reduce the running costs for users, future work will prioritize using the design of simulation experiments to study the optimum drive shaft speed, the eccentric distance of the eccentric vibrator and the discharge gap.

Conflicts of Interest: The authors declare no conflict of interest.
Generation of a de novo transcriptome from equine lamellar tissue Laminitis, the structural failure of interdigitated tissue that suspends the distal skeleton within the hoof capsule, is a devastating disease that is the second leading cause of both lameness and euthanasia in the horse. Current transcriptomic research focuses on the expression of known genes. However, as this tissue is quite unique and equine gene annotation is largely derived from computational predictions, there are likely yet uncharacterized transcripts that may be involved in the etiology of laminitis. In order to create a novel annotation resource, we performed whole transcriptome sequencing of sagittal lamellar sections from one control and two laminitis affected horses. Whole transcriptome sequencing of the three samples resulted in 113 million reads. Overall, 88 % of the reads mapped to the equCab2 reference genome, allowing for the identification of 119,430 SNPs. The de novo assembly generated around 75,000 transcripts, of which 36,000 corresponded to known annotations. Annotated transcript models are hosted in a public data repository and thus can be easily accessed or loaded into genome browsers. RT-PCR of 12 selected assemblies confirmed structure and expression in lamellar tissue. Transcriptome sequencing represents a powerful tool to expand on equine annotation and identify novel targets for further laminitis research. Background Laminae are interdigitated dermal and epidermal tissues found in the hooves of livestock that form the attachment to the distal skeleton. Equids have an additional specialization in the form of secondary laminae that project from the primary laminae which further increase the surface area and thus strengthen this connection [1]. The junction between dermal and epidermal laminae must be strong enough to withstand the forces of weight bearing and motion without separation, while providing sufficient flexibility to absorb concussive forces and allow growth. Inflammation of the laminae (laminitis) is a devastating disease that can lead to separation of these tissues and a rotation of the third phalanx (P3) away from the hoof wall. The etiology of laminitis is poorly understood. Many risk factors have been identified in the horse, including inflammation in other parts of the body, sepsis, metabolic conditions, or mechanical stress [2]. Currently, as there are very few treatments available, prevention through avoiding known risk factors is recommended. In the early stages of laminitis (either pre-clinical symptoms or at the onset of lameness), prolonged cooling of the hooves in ice water has been shown to reduce severity of the disease and prevent separation of the laminae [3]. However, if adequate treatment is not provided promptly, euthanasia is often the result. A study from the United States Department of Agriculture in 1998 estimated the annual cost of lameness at $678 million, with laminitis accounting for 15 % of the reported cases [4]. The American Association of Equine Practitioners has specifically identified laminitis as the disease most frequently reported as needing more research [5]. Several methods have been devised to experimentally induce laminitis, including carbohydrate overload, oligofructose overload, and black walnut extract administration. Although all of these models will result in the disease, key differences in physiological response (as compared to the natural etiology) have been demonstrated [6,7]. 
However, as natural cases can be much more difficult to acquire, these models continue to serve an important role in research. Gene expression has been applied in studies to better understand the disease process. However, much of this research has focused on the expression of few known genes, using qPCR to target specific pathways [8][9][10][11]. Only two studies have attempted a transcriptome-wide view of laminitis. The first commercially available wholetranscriptome equine-specific microarray was not published until 2009, therefore early studies attempted two different approaches. The first study chose to use crossspecies hybridization with the bovine gene expression chip, identifying 155 out of the 15,000 genes assayed to be significantly up-regulated [12]. They were unable to identify any down-regulated genes, which was likely due to the high false-negative rate associated with imperfect hybridization. A second study instead generated a custom equine-specific array with 3076 targets derived from leukocyte EST libraries [13]. Less than 100 of these genes were found to have significant differential expression. Both of these projects, and any current work utilizing microarrays, are hindered by insufficient genome annotation in the horse. The only major annotation attempt used an older sequencing technology, generating 35 bp reads from eight diverse tissue types [14]. They identified that 48 % of genes displayed tissue-specific expression patterns, with 7 % of the genes only found in one tissue type. However, this data was not incorporated into automatic annotation pipelines for the popular genome browsers, and lamellar tissue was not included in sequencing. Using this data, the authors also demonstrated there were 428 genes completely lacking in equine annotation, even though many of these genes have data in other species [15]. Whole transcriptome sequencing (RNA-seq) is a promising solution for interrogation of gene structure and expression, especially in a divergent tissue like the hoof. RNA-seq is a hypothesis-free examination of all cDNA in a given sample, allowing for the identification of unique features such as unannotated transcription, splice sites, allele-specific expression, anti-sense expression, and alternative poly-adenylation [16][17][18]. Additionally, technical variation is reportedly low, with high reproducibility between lanes [19]. Studies have continuously demonstrated high correlation between microarray differential expression studies and RNA-seq strategies, noting the main difference is improved sensitivity for low-abundance transcripts by RNA-seq [20,21]. However, as RNA-seq is still considerably more expensive and computationally intense than microarrays, much mainstream research still relies on microarrays or qPCR. The objective of this study was to produce a transcriptome resource for the study of laminitis. Given that recent studies rely heavily on qPCR, the generation of a set of equine, hoof-specific transcripts can greatly benefit in the selection of novel targets for expression studies. Current annotation is largely based on computational predictions and gene models from other species, among which there is not a good physiological model for the laminae. Additionally, while there have been a few equine RNA-seq studies, raw data is often only placed in public databases and not fully processed or curated [14,[22][23][24]. 
Thus these valuable datasets are difficult to access and may require intensive bioinformatic analysis before use in subsequent projects, and sadly are often underutilized. Illumina sequencing and assembly Whole transcriptome sequencing of the three samples in this experiment generated a total of 112,979,003 reads. Sequencing data from all three individuals was pooled for assembly in order to capture genes that may be rare or unique to the laminitic state. After filtering, 87,598,529 high-quality reads remained. The iAssembler pipeline was used to correct for misassemblies due to heterozygosity (either within or between individuals) [25]. A summary of assembly metrics can be found in Table 1 [26]. The number of unigenes (unique transcripts) mapped per locus ranged from 1 to 139, averaging 2.44 isoforms representing 25,580 loci. Many of these unigenes are shorter transcripts covering only a single exon or splice junction, partially due to low-expression transcripts lacking sufficient coverage for assembly ( Fig. 1). Considering only the longer 3+ exon transcripts resulted in similar statistics (Table 2). Overall, 88 % of raw sequencing reads mapped to the equCab2 reference genome [27]. The GATK recommended pipeline identified a total of 131,034 SNPs [28][29][30]. We filtered the assembly to remove any alignments matching repeat regions, and then removed SNP calls that fell outside of our transcript models, reducing potential false positive SNPs resulting from incorrectly mapped spliced reads. The 119,430 SNPs that remained (91.1 %) were submitted to dbSNP at NCBI (Table 3). Annotation with known gene and protein databases Using blastx, a total of 36,195 unigenes (48 %) matched to proteins in the non-redundant database (significance defined as an E-value less than 1e-5). To simplify the analysis, only the top hit for each unigene was retained. 35 % of the matches were to equine proteins, and of these, 97 % were computationally derived entries (XP_ accession numbers). Additionally, unigenes aligned by BLAT to the equine genome were compared to the NCBI horse RefSeq, NCBI non-horse RefSeq, and Ensembl prediction tracks available from the UCSC Genome Browser. A summary of overlap between the known databases is provided in Table 4. Gene IDs were assigned to each unigene based on matches to the non-redundant protein database or RefSeq alignments, resulting in annotation of 44,730 transcripts. Unannotated transcripts retained their identifier provided by Trinity. These transcripts likely correspond to novel genes or non-coding RNA and were selected for further examination. This annotated alignment can be loaded into commonly used genome browsers to supplement existing annotation ( Fig. 2) [31]. Amplification and sequencing of cDNA from putative novel transcripts There were a total of 13,632 unigenes with 3 or more exons that did not match to known RefSeq annotation. Of these, there were 4,718 that did not overlap with other unigenes. A subset of 12 unique transcripts that contained ORFs which spanned over 3 exons were selected for molecular validation (Table 5). RT-PCR successfully amplified cDNA from all selected transcripts. All products were of the expected length and Sanger-derived sequences matched completely with assembled sequences (example in Fig. 3). As differential expression was not the goal of this study, no quantitative analyses were attempted. However, one selected transcript did display a qualitative trend for disease-specific expression (Fig. 4). 
The best protein match (placenta-specific protein 1 precursor), located on ECAX, is a computational prediction with support from 1 equine mRNA and 85 % coverage of RNA-seq alignments from one sample in the short-read archive. The only other equine-specific protein match was to a homologous gene, placenta-specific protein 1-like (E = 4e-9), which was mapped approximately 100 kb downstream of the unigene alignment on chromosome 12. However, this record is completely computationally derived, supported only by similarity to two proteins. Discussion and conclusion We utilized RNA-seq to successfully generate a transcriptome assembly of equine lamellar tissue. As the hoof is a specialized tissue, it likely has unique transcripts that previous annotation efforts would have missed. By pooling data from healthy and diseased tissues, we have captured loci that should be valuable to future differential expression studies. Though the varied physiological states could result in differences between each transcriptome, pooling the data prior to assembly ensures sufficient power to assemble lower expressed loci. This data set represents a valuable tool for laminitis research, providing information on both known genes expressed in the hoof, as well as a wealth of previously unannotated transcripts. The transcripts identified in this study can now be utilized with other technologies to search for novel targets with relevance to laminitis. RNA-seq provides unprecedented power for transcript and isoform discovery. However, relatively little of this information trickles down to human-readable annotation and applied datasets useful to the average molecular biologist. While some resources now exist that attempt to bridge this gap by providing bioinformatics instruction for molecular biologists, this approach is not practical for all researchers [32]. Our newly generated data is available in two ways. Raw reads and identified variants have been deposited in public databases, so that they may be accessed or incorporated into automated pipelines. NCBI has recently begun to advantageously incorporate RNA-seq data from the short-read archive into their RefSeq annotation pipeline, and the inclusion of additional unique tissue types is essential for robust annotation from this automated analysis. However, these updated annotations (especially computational predictions) are not always readily accessible in popular genome browsers. Therefore, we have also provided downloadable Browser Extensible Data (BED) tracks of our assembly. The first file, labeled the "full" assembly, includes models of any number of exons. We have also provided only those models with 3 or more exons (the "larger" assembly) in order to remove partial transcripts likely originating from poorly expressed loci and intronless non-coding RNAs. The BED format is small and much easier to use than the raw sequencing data itself, including only the positions of each feature (not the exact sequence). BED files also are quite easy for individual researchers to load gene model annotation into their browser of choice [33]. Our data also includes potential non-coding RNAs, which are an emerging field of research. As the RefSeq set is specifically designed for protein-coding genes, all other transcript types are not given accession numbers. There are existing databases of non-coding RNAs available for the human and mouse genomes, however for all other species, there are only the few (less than ten) entries manually curated from the literature [34]. 
Unlike protein-coding genes, there is considerably less sequence conservation between species in non-coding RNAs, necessitating within species identification [35]. Within non-coding RNAs, there are two main classes: small (<200 bp) and long (>200 bp) [36]. While long noncoding RNAs are often picked up in normal RNA-seq experiments (and must be separated from proteincoding mRNAs for analysis), the smaller molecules are often excluded in normal RNA-seq library preparation, and require additional methodologies to sequence. The function of non-coding RNAs has been the subject of recent controversy. It is debatable whether the observed RNA transcription is biologically relevant, or if transcription may simply be technical noise [37,38]. Well documented functions for non-coding RNA include regulation of the genome (through chromatin modification, DNA binding, and protein binding) and of cellular differentiation during development [39][40][41]. One of the most well-known non-coding RNAs is XIST, which regulates X chromosome inactivation in females. More recently, several mutations that cause overexpression of a conserved long non-coding RNA proved to be responsible for the bovine polled phenotype [42]. It is thus important to consider all possible RNAs in studies of differential expression, instead of only the proteincoding transcripts. Utilization of this data in studies of laminitis could identify new targets and pathways to help further our understanding of the etiology. Whereas current veterinary methods generally can only detect laminitis at the onset of lameness, the development of biomarkers could allow for rapid identification (and thus the most effective treatment) of cases before permanent damage occurs. Future understanding of the precise pathways underlying laminitis could lead to vital novel prevention methods and treatments. Sample collection and transcriptome sequencing Samples were collected from four horses presented for necropsy for disposal to the Cornell University College of Veterinary Medicine (samples labeled CU). An additional two lamellar samples were provided by a collaborator (labeled LSU). Medical history was collected when available. Full-thickness, mid-sagittal hoof sections were placed on ice for transport to the lab, gross examination, and dissection of lamellar tissue. Samples were placed into RNA later (Life Technologies, Carlsbad, CA, USA) and stored at -80°C until processing. Phenotype was assessed through medical history, physical exam prior to euthanasia, and gross findings. Control animals were defined by the distal phalanx running parallel to the hoof wall, with no bruising or thickening of the laminae. Acute cases often had some degree of rotation and/or sinking, as well as lamellar hemorrhage, edema, and thickening. Chronic cases were defined by thickened, fibrous lamina; variable resorption and/or remodeling of the distal phalanx, often with rotation and/or sinking; and variably severe chronic hemorrhage. Sample information can be found in Table 6. RNA was extracted from approximately 60 mg of lamellar tissue using the Qiagen RNeasy kit (Qiagen Inc., Valencia, CA, USA) following manufacturer's protocols for fibrous tissue. 50 μL of RNA was DNase treated using either the Ambion Turbo DNA free kit (Life Technologies, Carlsbad, CA, USA) or Qiagen DNase I kit, followed by Qiagen RNA cleanup kit. Quantification was carried out using a NanoDrop spectrophotometer (Nano-Drop Technologies LLC., Wilmington, DE, USA). 
Library preparation and sequencing was performed by Cornell University's Life Sciences Core Laboratory Center. A total of 5-10 μg of RNA from each sample De novo assembly Raw RNA-seq reads were processed in two steps. First, a custom R script (based on the ShortRead package) was used to remove adapter and barcode sequences, as well as to trim low quality (Q < 20) bases from both ends of the reads [43]. Trimmed reads shorter than 25 bp were discarded. Second, reads were aligned to the GenBank virus (version 186) and ribosomal RNA sequence databases with BWA under default parameters [44]. Only unmapped reads were retained for assembly. The filtered reads from all samples were pooled and de novo assembled into contigs using Trinity with "min_kmer_cov" set to 2 [45]. In order to remove some of the redundancy of Trinity-generated contigs, a further assembly step using iAssembler with a minimum of 99 % identity (-p) was performed [25]. Contigs shorter than 200 bp were discarded. Unigene annotation All unique transcripts (unigenes) were compared to the GenBank non-redundant protein database using blastx with an E-value cutoff of 1e-5. Only the protein with the lowest E-value (and thus highest significance) was retained for further analysis. Unigenes were also aligned to the equCab 2.0 reference genome using BLAT with parameters recommended for same-species mRNA alignments [46]. The pslCDnaFilter tool was used to remove alignments with less than 200 bp, 98 % identity, or 50 % coverage. The resulting PSL file was converted to BED format and compared with Equinespecific repeat annotation using BEDtools intersectBed in order to filter out alignments that contained over 10 % repetitive DNA [47,48]. Many retroviruses in the genome are expressed, but high homology among these elements often leads to chimeric and spurious assemblies, and thus creates problems for alignment-based analyses. The filtered unigenes were then compared to NCBI Non-Horse RefSeq, Horse RefSeq, and Horse Ensembl annotations using intersectBed at 10 % overlap. Putative gene names were assigned to unigenes based on high quality matches to NCBI non-redundant databases. Two BED files were produced for use in genome browsers (one containing all transcripts and one with only large transcripts containing 3 or more exons) [31]. Variant calling Raw sequencing reads were split by barcode and aligned to the EquCab 2.0 reference genome using BWA under default parameters. SAMtools was used to convert alignments to BAM format and to remove PCR duplicate reads [49]. SNPs were identified with GATK using the recommended pipelines with a Q > 30 cutoff [28][29][30]. VCFtools was then used to filter out variants with fewer than 10 observations, followed by BEDtools to remove variants that fell outside of regions with corresponding assembly alignments [50]. The final list of variants was pooled and submitted to NCBI dbSNP. Analysis of putative novel loci We screened the transcriptome assembly for novel loci with two steps. First, a second genome alignment was prepared by running RepeatMasker (using RepBase 2013-04-22 libraries) on the unigenes, then BLAT and subsequent filtering was performed as before [51,52]. Next, the unmasked and masked alignments were compared, and unigenes that passed filtering criteria in both datasets were selected. The unmasked alignments of these unigenes were then compared to RefSeq annotation using BEDtools, and alignments with less than 5 % overlap to known annotation were labeled as putative novel loci. 
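As an illustration of the overlap-based filtering described above, the short sketch below computes the fraction of a transcript model's exonic bases covered by known annotation intervals and applies a 5 % threshold. It is a simplified, pure-Python stand-in for the BEDtools intersectBed step, with made-up coordinates; it is not the pipeline actually used in this study, and it assumes the annotation intervals do not overlap one another.

```python
def overlap_fraction(exons, annotations):
    """Fraction of a transcript's exonic bases covered by annotation intervals.

    exons and annotations are lists of (start, end) tuples on the same
    chromosome, using half-open BED-style coordinates.
    """
    exonic = sum(end - start for start, end in exons)
    covered = 0
    for e_start, e_end in exons:
        for a_start, a_end in annotations:
            covered += max(0, min(e_end, a_end) - max(e_start, a_start))
    return covered / exonic if exonic else 0.0

# Hypothetical transcript model and RefSeq-like intervals, purely illustrative
transcript_exons = [(1000, 1200), (1500, 1700), (2400, 2600)]
known_annotation = [(1190, 1210)]

frac = overlap_fraction(transcript_exons, known_annotation)
print(f"overlap = {frac:.3f}, putative novel locus: {frac < 0.05}")
```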
All matches to the unassembled chromosome (chrUn) were discarded. Although valuable novel genes are likely to be found there, the incomplete state of assembly in this region makes downstream alignment based analyses problematic. Twelve novel genes were selected for RT-PCR validation and proof of concept based on additional criteria. ExPasy "translate" tool was used to identify open reading frames (ORFs) in these unigenes [53]. These were then aligned back to the equCab 2.0 reference genome using BLAT, and only unigenes with ORFs spanning at least three exons on their corresponding transcript annotation were retained, thus identifying larger transcripts with significant exon/intron structure. The ORFs were then compared to the non-redundant protein database using blastp, and targets with little to no experimental data were selected for further validation. Within each gene, an amplicon of cDNA was targeted using intron spanning primers created with the Primer3 software (Table 7) [54]. Two-step RT-PCR was performed in 15 μL reactions with 1 μg RNA using the SuperScript VILO MasterMix kit (LifeTechnologies, Carlsbad, CA, USA) followed by standard PCR. 1 μL of cDNA was amplified in 10 μL PCR with FastStart Taq DNA polymerase (Roche Applied Science, Branford, CT, USA) and included all reagents per the manufacturers recommended conditions. Amplification was verified on 3 % agarose gel, and the resulting PCR products were submitted to the Cornell Core Life Sciences Laboratories Center for sequencing using standard ABI chemistry on a 3730 DNA Analyzer (Applied Biosystems Inc., Foster City, CA, USA). Amplicons were aligned to their corresponding unigenes to confirm identity using Consed [55].
A Well-Mixed Computational Model for Estimating Room Air Levels of Selected Constituents from E-Vapor Product Use Concerns have been raised in the literature for the potential of secondhand exposure from e-vapor product (EVP) use. It would be difficult to experimentally determine the impact of various factors on secondhand exposure including, but not limited to, room characteristics (indoor space size, ventilation rate), device specifications (aerosol mass delivery, e-liquid composition), and use behavior (number of users and usage frequency). Therefore, a well-mixed computational model was developed to estimate the indoor levels of constituents from EVPs under a variety of conditions. The model is based on physical and thermodynamic interactions between aerosol, vapor, and air, similar to indoor air models referred to by the Environmental Protection Agency. The model results agree well with measured indoor air levels of nicotine from two sources: smoking machine-generated aerosol and aerosol exhaled from EVP use. Sensitivity analysis indicated that increasing air exchange rate reduces room air level of constituents, as more material is carried away. The effect of the amount of aerosol released into the space due to variability in exhalation was also evaluated. The model can estimate the room air level of constituents as a function of time, which may be used to assess the level of non-user exposure over time. Introduction With the rapid rise in the use of e-vapor products (EVPs), including e-cigarettes and tank devices, public health agencies and U.S. Food and Drug Administration (FDA) have expressed concern about the potential for exposure of non-users to e-cigarette aerosols [1,2]. In 2014 and 2015, FDA Center for Tobacco Products sponsored three public workshops on e-cigarettes. The published proceedings of these workshops called for additional research on exposure and health effects from second-and third-hand exposure to e-cigarette constituents [3]. Second hand aerosol refers to the exhaled aerosol in air and third hand aerosol refers to aerosol deposited on the surfaces in the room. Some of the questions of interest raised at these workshops included (1) How far do aerosols travel in a confined environment? (2) How do exhaled aerosol properties impact second-hand and third-hand exposures, including what chemicals/toxicants are potentially delivered to non-users? (3) What are the potential impacts of e-cigarette use on the levels of particulate matter and chemicals/toxicants in enclosed spaces such as cars, homes, office settings, and public buildings? VOCs, and other constituents in a small meeting room following e-cigarette use. They concluded that exposure of bystanders to the chemicals in the exhaled e-cigarette aerosol, at the levels measured in the study, were below current regulatory standards that are used for workplaces or general indoor air quality. In another experiment [18], glycerol was detected during the e-cigarette vaping session, but nicotine, acrolein, toluene, xylene, nitrogen oxides, carbon monoxide (CO), and polycyclic aromatic hydrocarbons (PAHs) were not detected. Schober and colleagues [19] reported increases in the room level of 1,2-propandiol, glycerol and nicotine during the vaping session compared with measurements taken on a different control day, with no subject present in the room. Concentration of benzene, acetone, acrolein, and formaldehyde generally did not exceed background levels. 
The authors reported 30%-90% increases in the sum of 16 PAHs during the vaping sessions as compared with control conditions. However, Farsalinos and Voudris [20] suggested that the differences between control and the vaping sessions could have been in part due to the difference in the level of PAHs present in the environment on the two different testing days. In addition, differences in usage and/or inhalation rates between the vaping sessions, surface deposition, etc. could also account for some of the differences in the results. Schripp et al. [21] also reported slight increases in the amount of aldehydes measured in a test chamber when e-cigarettes were used, compared to when no product was used. They attributed the presence of formaldehyde, acetone, and acetic acid, when no product was used, to the presence of these compounds in human exhaled breath [22,23]. Another study measured indoor air concentrations from e-cigarette use, using validated industrial hygiene sampling methodologies [24]. The study included a large number of participants (n = 185 Study 1; n = 145 Study 2), and active samples were collected over a 12-h period, for four days. Data from the study also indicated that the majority of chemical constituents sampled were below quantifiable levels of the analytical methods [24]. Data from two studies were used to validate our model. The first set of data came from our controlled clinical study, in which the exhaled aerosol constituents in room air were measured following the use of selected EVPs. In this study, levels of nicotine, propylene glycol, glycerol, 15 carbonyl compounds, 12 volatile organic compounds, and 4 trace metals were measured using ISO or EPA methods [25]. The second dataset is from Czogala et al. [26] who used a smoking machine to generate aerosol. The study measured nicotine, aerosol particle concentration, CO, and VOCs in a chamber where cigarettes and e-cigarettes were used. Nicotine was measurable during the puffing sessions, and was found to be approximately 10 times lower than the levels present during cigarette smoking. The authors concluded that the use of e-cigarettes does not result in significant amounts of VOCs and CO being emitted [26]. We developed and validated a well-mixed computational model that is based on principles similar to those used in the indoor air quality assessment models, referred to by the EPA. The model predicts vapor-particle partitioning and concentration of chemical constituents of aerosol over time, as it travels through a defined indoor space. The model is based on physical and thermodynamic interactions between air, vapor, and particulate phase of the aerosol. These processes are mathematically represented by a set of simultaneous equations including conservation of mass, vapor/liquid partitioning, air flow and species transport, and mixing processes. A number of sensitivity analyses have been performed to evaluate the impact of various parameters that affect the indoor concentration of exhaled aerosol and will be discussed in the paper, along with details on model development and validation. Physical Basis of the Model The levels of particulate matter and chemical constituents present in a confined space as a result of EVP use depends on (1) the amount of each chemical released into the indoor space upon exhalation by EVP users and (2) dilution of the aerosol due to dispersion and ventilation, as it travels through the confined space. 
The amount of aerosol and chemicals generated by EVP usage depends on many factors, including liquid composition, device performance, and user behaviors. However, only a fraction of the aerosol inhaled by the EVP user is subsequently exhaled. In addition, the aerosol released into the indoor space undergoes rapid and dynamic change in composition, concentration, and particle size distribution due to dilution by air within the space and ventilation air. The size of the indoor space, the amount and composition of the exhaled aerosol, and the frequency of usage are all important parameters that affect the level of constituents in the indoor space. The concentration of aerosol that is released into an indoor space rapidly drops as the aerosol is diluted with air. Furthermore, volatile constituents in the aerosol evaporate and result in the shrinking of particle size and changing its composition [14,27]. This phenomenon is easily visible when aerosol from an EVP is exhaled into air as compared with cigarette smoke, which is more stable. The reason for the difference is that most constituents in EVP aerosol are more volatile than constituents in cigarette smoke [28]. As evaporation continues, mixture composition in particles changes, which requires updating the mole fraction of each constituent in order to properly capture the rate of evaporation. This process is transient in time and a vapor-liquid-equilibrium relationship must be used at each time step. Mathematical Representation The mathematical representation of a well-mixed model is presented here. Aerosol with a prescribed chemical composition, particle size, and mass density is released into a confined space at a prescribed function of time. The space is ventilated with fresh air at a rate characterized by an air change per hour (ACH). The particles generally shrink due to evaporation of constituents into air, due to dilution. The time scales of interest are much larger than evaporation and mixing times, so that thermodynamic phase equilibrium between the particle and vapor is assumed to hold. Definitions of the terms listed within the equations below are presented in the "Nomenclature" section. At time t, i = 1, 2, . . . N constituents are present in the space, in the vapor (v) and liquid (l) phases. The mass balance for each constituent requires: where m i must be updated for subsequent time steps to account for the mass of this constituent that is released into the space minus that which is carried out by the ventilation air during the corresponding time increment: where Q a denotes the air ventilation rate. Most constituents enter the space only through the aerosol that is exhaled or machine-generated. However, water enters the equation through multiple sources including: as a constituent in the aerosol, moisture in the air carrying the aerosol, moisture in the room air and in the ventilation air. All sources have been accounted for in the water mass balance. The instantaneous concentration of constituent i in the room at time t is defined as where V r represents the volume of the indoor space. The vapor phase concentration (vapor density) of each constituent in air can be expressed as where subscript i refers to all variables within the parenthesis in the numerator. 
It is important to note that at each time step as the concentration of constituent i changes due to dilution with incoming air (or moisture in air for the case of water content of a particle), the mole fraction of constituent i, x i , as well as its mass fraction in particle, y i , will also change. These two are related through mixture relation: Another thermodynamic relationship that will be used is the molecular mass of the mixture in particles, which also varies with time. In terms of the individual mole fractions and molecular mass, it is expressed as: Using these relationships for each of N constituents, rearranging, and combining some of those we arrive at N simultaneous algebraic equations with N unknowns: The summation in Equation (7) over j includes all constituents except i, that is j = 1, 2, . . . i − 1, i + 1, . . . N. The new variable w i is defined as the ratio of mass of i in liquid to the total mass of i in the space at time t: Equation (7) was simultaneously solved for w i for N constituents, at every time step, while using previous equations as needed. Initially all m i as well as the liquid vapor partitioning (m li (t = 0) and m vi (t = 0)) are assumed to be known. An iterative method has been used for each time step until all conditions above, including Equation (6), are met. The fsolve function in Matlab ® (The MathWorks Inc., Natick, MA, USA) was used in the following examples to solve the system of equations. Input Variables Four categories of input data were needed to run the model: indoor space size and ventilation rate, air temperature and humidity, properties and rate of aerosol released into the indoor space, and thermodynamic properties of the constituents of interest. The indoor space size and ventilation rate greatly affect the concentration of constituents. The dimensions of the space (volume) and ACH were also required to run the model. If ventilation included fresh air as well as recirculated air for humidity and temperature control, the volume of ducts carrying the recirculated were included in the space volume, but only fresh air was included in the ventilation rate. Temperature within the space is also an important parameter in vapor-liquid partitioning and was included in the input data. The aerosol temperature at the time of release into the space is a relevant parameter that defines the vapor-liquid partitioning of each constituent entering the space. However, it is reasonable to assume that the air temperature is not affected by the aerosol temperature as the aerosol mass is significantly less than the air mass of the indoor space. Properties of aerosol released into the space included aerosol mass, particle size, composition, and the amount of each constituent in vapor and particulate phase. The exhaled aerosol mass, for a given EVP, is highly variable, depending on the vaping habit of the EVP user. For example, some users tend to inhale deeply, while others prefer to exhale after a brief mouth hold. More exhaled aerosol will enter the space in the latter case. The composition of the exhaled aerosol is considerably different from that of the e-liquid in the device. Constituents with high vapor pressure and higher water solubility tend to be absorbed more in the respiratory tract, whereas a higher percent of the inhaled aerosol will be exhaled for the less volatile constituents. 
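The core dilution behavior described by the mass-balance equations above can be illustrated with a deliberately simplified, single-constituent sketch. It tracks the total mass of one constituent in a well-mixed room with pulsed releases and first-order removal by ventilation air, and it ignores the vapor-liquid partitioning, water uptake and evaporation handled by the full model; the room volume and air exchange rate echo the kind of scenario discussed later (a 39 m³ room at roughly 7 ACH), while the released mass is a placeholder.

```python
import numpy as np

def room_concentration(release_times_s, release_mass_mg, room_volume_m3,
                       ach_per_h, t_end_s, dt_s=1.0):
    """Well-mixed, single-constituent room model.

    Mass balance per time step: dm/dt = S(t) - (Q_a / V_r) * m, where
    Q_a / V_r is the air change rate.  Returns time (s) and concentration
    (mg/m^3) arrays.
    """
    k = ach_per_h / 3600.0                 # ventilation removal rate, 1/s
    times = np.arange(0.0, t_end_s, dt_s)
    release = dict(zip(release_times_s, release_mass_mg))
    mass = 0.0
    conc = np.zeros_like(times)
    for i, t in enumerate(times):
        mass += release.get(float(t), 0.0)  # instantaneous pulsed release
        mass -= k * mass * dt_s             # carried out by ventilation air
        conc[i] = mass / room_volume_m3
    return times, conc

# Two releases (t = 0 and t = 30 min) in a 39 m^3 room at ~7 ACH,
# with a placeholder 1 mg of constituent released each time.
t, c = room_concentration([0.0, 1800.0], [1.0, 1.0], 39.0, 7.0, 3600.0)
print(f"Peak concentration: {c.max():.4f} mg/m^3")
print(f"60-min average:     {c.mean():.4f} mg/m^3")
```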
The aerosol, depending on the composition can absorb a substantial amount of water during the inhale-exhale process due to the high moisture content of the airways in the lung. Therefore the exhaled aerosol consists of a significant amount of water [13]. Measuring the exhaled aerosol mass and composition released into a confined space during EVP use is a challenging task. Exhaled aerosol properties can be measured by collecting and analyzing the exhaled breath condensate (EBC). However, it is not clear that EVP users exhale in the indoor space the same way that they exhaled into an EBC system. Furthermore, there is substantial inherent variability in how individual users inhale and exhale. Given these challenges, it is generally preferred to validate the model with experimental data in which the aerosol is released into the space at a rate that is controlled. This can be accomplished by generating aerosols using a smoking machine, which has a defined puff duration and puff rate, and directly releasing the aerosol into the space [26]. The level of constituents measured in the space can then be used to validate the model. Once the model is validated, it can be used to estimate the concentration of constituents in an indoor space where EVPs are being used, under a variety of usage and environmental conditions. The last set of input data for the model was the thermodynamic properties of constituents present in the aerosol, including water, which, because of abundance in air, plays an important role in the final composition. These properties included vapor pressure of each constituent as a function of temperature, activity coefficient, molecular weight, and relative humidity of the indoor space and ventilation air. Results and Discussion Results from two aerosol sources are presented here: (1) smoking-machine generated aerosol and (2) exhaled aerosol released into the indoor space. The concentration of nicotine in the indoor space was estimated using the model and compared with experimental data. Predicted results for glycerol and propylene glycol are also presented. The input data for the two cases were obtained from two separate studies. The data from Czogala et al. [26] were used for the case involving aerosol generated by smoking machines. Although the dispersion of aerosol generated by a smoking machine is not related to the second hand exposure, nevertheless Czogala et al.'s [26] data were used for model validation because of different ventilation rates and aerosol release rates used in the experiment. For the exhaled aerosol case, data from our controlled clinical study were used [25,28]. Smoking Machine-Generated Aerosol Source Czogala et al. [26] used a smoking machine to produce and release aerosol into a ventilated room that measured 39 m 3 . During a 60 min test, aerosols were released twice at time zero and at 30 min. Each release was either at what they defined as a high (15 puffs) or a low (7 puffs) level. The puff duration and volume were 1.8 s and 70 mL, respectively, with a frequency of one puff every 10 s. Three commercial e-cigarettes with two nicotine levels (1.1% and 1.8%-1.9%) were used in their experiments. Two ventilation levels, as described in [26] (approximately 7 and 10 ACH) were used. Overall, 12 combinations of release level, ventilation level, and e-cigarette were tested. Samples from the room were collected over 60 min and analyzed for CO and nicotine. The data from four runs using one e-cigarette (EC2) from Czogala et al. 
[26] study were used for our modeling examples. Some of the input data are shown in Table 1. The aerosol mass delivery per puff was obtained from Goniewicz et al. [29], and the aerosol composition from the smoking machine was assumed to be the same as the e-liquid in the device.

Table 1. Input data from four runs using one e-cigarette as described in the text.

The average value of nicotine concentration over 60 min for the four cases as estimated by the computational model, as well as the mean measured data, are shown in Figure 1 along with mean values for these cases. The results may be interpreted using the conditions in Table 1. All conditions were similar during Runs 1 and 2, except for the ventilation level, which was lower for Run 2. At lower ACH, less aerosol was transferred out of the room during the run, and more remained in the room. This can be seen in Figure 1. During Runs 2 and 3, all conditions were almost identical except for the amount of aerosol released into the room, which was higher during Run 3. Figure 1 shows higher predicted nicotine concentration in the room for Run 3. Finally, the conditions for Runs 3 and 4 are very similar, which resulted in similar predicted nicotine concentrations, as shown in Figure 1.

The biggest individual difference was found for Run 3, in which the measured nicotine concentration value was lower than predicted. It was expected that the measured values would be similar for Runs 3 and 4, as all the influencing parameters are approximately the same. It is also worth noting that the sampling point in the room was about 1 m from the e-cigarette location [26], whereas the model results were for the room average values. In addition to the location of the sampling point, other factors might account for the difference in the measured values, including the turbulence in the room which is inherently unsteady, the sample collection method, and variations in sample chemical analysis.

Furthermore, a statistical analysis was performed to determine if the indoor air nicotine levels estimated by the model differed from the experimental data. Given that the data do not follow a normal distribution, the Wilcoxon two-sample test, a nonparametric test, was conducted for this comparison. Significance level was set at p-value < 0.05. The results suggested that there is no statistically significant difference between the mean nicotine concentration levels produced by the model and the experiment (z = −0.1443, p = 0.8852) across the four runs.
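For reference, this kind of nonparametric comparison can be reproduced with standard statistics libraries. The sketch below applies a Wilcoxon rank-sum (two-sample) test to placeholder arrays of modeled and measured concentrations; the numbers are illustrative only and are not the study's actual data.

```python
from scipy.stats import ranksums

# Placeholder values for four runs (ug/m^3), purely illustrative
modeled = [2.6, 3.8, 5.1, 5.0]
measured = [2.4, 3.9, 3.6, 5.2]

stat, p_value = ranksums(modeled, measured)
print(f"z = {stat:.4f}, p = {p_value:.4f}")
```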
A useful result from the model, which is difficult to obtain experimentally, is the evolution of the concentration of constituents in the room over time. This is particularly important if the aerosol release is highly variable in time. An example for nicotine concentration is shown in Figure 2. As expected, the predicted average nicotine concentration in the room is highest immediately after each aerosol release and then drops over time as nicotine is carried out by the ventilation air. Transient experimental data were not available for this run.

Exhaled Aerosol Source

Results from the computational model were then compared with the measured data from our internal clinical study. In this study, exhaled breath measurements were made by asking each of 9 study participants to take 10 puffs of an EVP (5 s duration) and, after each puff, to exhale directly into the exhaled breath system (EBS) shown in Figure 3 [25]. The EBS consisted of a filter and a cryogenically cooled trap. The collected EBS samples were analyzed for nicotine and other constituents. The e-vapor device used for this experiment was a prototype EVP. The e-liquid composition used in the EVP on a weight basis was approximately 41/42/14.6/2.4 of propylene glycol/glycerol/water/nicotine, respectively. The 10-puff average of machine-delivered aerosol mass for a 5 s puff with a 55 mL puff volume was measured to be 5.2 mg/puff. The amount inhaled by each study participant was assumed to be the same (5.2 mg/puff).

The exhaled breath results for nicotine are shown in Figure 4. The y-axis represents the fraction of inhaled nicotine that is exhaled. The inhaled nicotine amount is assumed to be 2.4% of 5.2 mg/puff, as described above.
Figure 4 shows that there is variability in the exhaled fraction of nicotine among individuals, which may be driven by variability in individual usage behaviors and depth of inhalation. On average, 3.4% of the inhaled nicotine is exhaled, with 7 of 9 participants exhaling less than 3.4%. Since the same participants used the same EVP in the room air level measurement study, the total exhaled constituents were used as input data for the computational model to predict the indoor air concentrations.
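A quick back-of-the-envelope conversion of these figures into an exhaled nicotine mass per puff, which is the source term the indoor model needs, is shown below (simple arithmetic on the quoted values).

```python
# Exhaled nicotine mass per puff implied by the values quoted above.
aerosol_mass_mg_per_puff = 5.2      # machine-measured delivery for a 5 s, 55 mL puff
nicotine_weight_fraction = 0.024    # 2.4% nicotine in the e-liquid
mean_exhaled_fraction    = 0.034    # 3.4% of inhaled nicotine exhaled on average

inhaled_mg = aerosol_mass_mg_per_puff * nicotine_weight_fraction       # ~0.125 mg/puff
exhaled_ug = inhaled_mg * mean_exhaled_fraction * 1000.0               # ~4.2 ug/puff
print(f"inhaled ~{inhaled_mg:.3f} mg/puff, exhaled ~{exhaled_ug:.1f} ug/puff")
```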
The controlled clinical study was conducted in a mobile environmental exposure chamber (mEEC) (Inflamax Research, Mississauga, ON, Canada), as shown in Figure 5. The 113 m³ mEEC was ventilated and conditioned for temperature. The air circulation rate was 1190 m³/h, of which 255 m³/h was fresh air that was mixed with the recirculated air. The ACH, based on the fresh air, was calculated to be 2.25 h⁻¹. Only the fresh-air rate was used for computational purposes, as the recirculated air does not have any significant effect on the total concentration of the constituents in the mEEC, other than contributing to better mixing of the air in the exposure chamber.

In one study [30], the same 9 participants, whose exhaled breaths were measured earlier, spent 4 h in the mEEC. Each participant was instructed to take 10 puffs, of 5 s duration, every 30 min from the same EVP described earlier. Room air samples were collected at six different locations inside the exposure chamber and in the air return line to provide an estimate of the average concentrations of constituents. Indoor air samples were collected over the 4 h duration and analyzed for nicotine and other constituents.

In order to model this case, certain assumptions were made. The main assumptions were that (1) participants exhale in the mEEC the same way as they exhaled in the exhaled breath study, and (2) the 90 puffs (9 participants, each taking 10 puffs) are distributed evenly over each 30 min of the mEEC study. Both assumptions impose certain limitations on the accuracy of the model predictions. For example, the back pressure during exhalation into the exhaled breath system causes the amount of exhaled aerosol to differ from the aerosol exhaled during normal EVP use in the mEEC. Furthermore, there is puff-by-puff variation in the exhaled aerosol from an individual user and even more variability among users.
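For orientation, the ventilation and dosing figures above can be restated as an air exchange rate and a puff schedule; the short sketch below is straightforward arithmetic on the quoted values and adds nothing beyond them.

```python
# Fresh-air based air exchange rate and the assumed even puff schedule in the mEEC.
fresh_air_m3_per_h = 255.0
chamber_volume_m3  = 113.0
ach_per_h = fresh_air_m3_per_h / chamber_volume_m3            # reported as ~2.25 h^-1

puffs_per_half_hour = 9 * 10                                   # 9 participants x 10 puffs
seconds_between_puffs = 30 * 60 / puffs_per_half_hour          # even-distribution assumption
print(f"ACH ~ {ach_per_h:.2f} per hour; one puff every {seconds_between_puffs:.0f} s")
```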
Figure 6 shows the predicted concentration of nicotine in the mEEC under the conditions described earlier. For this modeled scenario, the nicotine concentration rose over time and reached an equilibrium value of slightly over 3.5 µg/m³ after about 100 min. After that, as long as the EVP was being used at the same rate, the amount of aerosol emitted into the exposure chamber was balanced by the amount carried out by the ventilation air, and the concentration in the exposure chamber remained unchanged. Once EVP use was stopped at 4 h, nicotine levels in the room declined rapidly within 1 h.

Figure 7 compares the computational predictions with the measured value of the average nicotine concentration in the exposure chamber over the 4 h period. The error bar on the experimental data corresponds to the standard deviation of the mean of three replicate runs. Despite the limiting assumptions used in the model development, the prediction is within the range of experimental variability.
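The plateau in Figure 6 can be cross-checked with a simple steady-state balance in which the exhaled source rate equals removal by fresh air. The rough estimate below, built only from the values quoted earlier, lands in the same range as the modeled value of slightly over 3.5 µg/m³; exact agreement is not expected because the full model also handles the transient buildup and constituent partitioning.

```python
# Steady-state cross-check: C_ss ~= source rate / fresh-air flow.
exhaled_ug_per_puff = 5.2 * 0.024 * 0.034 * 1000.0    # ~4.2 ug nicotine exhaled per puff
puffs_per_hour      = 9 * 10 * 2                       # 9 users, 10 puffs every 30 min
source_ug_per_h     = exhaled_ug_per_puff * puffs_per_hour   # ~760 ug/h
fresh_air_m3_per_h  = 255.0

c_ss = source_ug_per_h / fresh_air_m3_per_h
print(f"steady-state estimate ~ {c_ss:.1f} ug/m3")     # ~3 ug/m3, same order as the model
```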
It is important to note that both the modeled and experimental values are extremely low and are below the detection limit of the method recommended by the National Institute for Occupational Safety and Health for indoor nicotine measurement, which is 15 µg/m³ (our limit of detection for this measurement was 0.25 µg/m³). They are also well below the US Department of Labor Occupational Safety and Health Administration permissible exposure limit of 500 µg/m³ [31].

The well-mixed model introduced here is not capable of answering the specific question stated at the top of the introduction section, "how far do aerosols travel in a confined environment?" To answer this question, a CFD-based distributed model is needed. Results from a distributed model will be presented separately [30]. Now that the well-mixed model has been validated to predict the average room level of nicotine over a prescribed period of time, we will use the model to estimate the room level of nicotine, propylene glycol, and glycerol under different hypothetical conditions.
Examples of Sensitivity Analysis

After demonstrating that the model can reasonably predict the indoor nicotine concentration under different EVP aerosol source conditions (smoking machine-generated and exhaled aerosols), we used the model to evaluate the effects of different conditions. These analyses are based on the input data from our controlled clinical study described earlier. In Figure 8a, a hypothetical scenario is considered in which all 9 study participants are assumed to exhale 16% of the inhaled nicotine (the highest exhaled percentage in Figure 4); in the average exhale case, each participant exhales the average ratio of 3.4%. The results show that the predicted equilibrium nicotine concentration after 100 min increases almost linearly with the amount of nicotine exhaled into the exposure chamber.

Figure 8b shows the effect of another usage variable: the number of puffs. Instead of taking 20 puffs/h, participants are assumed to take 10 puffs/h. As expected, the predicted indoor air nicotine concentration drops by almost 50% to a steady-state value of 1.6 µg/m³. Figure 8b shows that the steady-state concentration of nicotine is linearly proportional to the number of puffs; doubling the number of puffs almost doubles the concentration.

The effect of the air exchange rate is shown in Figure 8c. Increasing the ACH from 2.25 to 5 reduces the predicted concentration proportionally, as the amount of nicotine carried out by the ventilation air increases, with less remaining indoors. As shown in Figure 8c, the predicted steady state is also reached in a shorter time (50 min for 5 ACH vs. 90 min for 2.25 ACH).

Figure 9 shows the predicted exposure chamber concentration of nicotine when the EVP is used only during the first hour of a 4 h period. In this case, the concentration drops exponentially with time, and it takes about 2 h to return to baseline.
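The behavior in Figure 9 follows from first-order washout by the ventilation air once the source stops. A minimal check of the statement that levels return to baseline in about 2 h, assuming no residual sources, is shown below.

```python
# First-order washout after EVP use stops: C(t) = C0 * exp(-ACH * t).
import math

def washout(c0_ug_m3, ach_per_h, t_h):
    return c0_ug_m3 * math.exp(-ach_per_h * t_h)

c0 = 3.5  # ug/m3, approximate plateau from the modeled scenario
for t_h in (0.5, 1.0, 2.0):
    print(f"t = {t_h:.1f} h: {washout(c0, 2.25, t_h):.2f} ug/m3")
# At ACH = 2.25 h^-1, roughly 90% is removed within 1 h and about 99% within 2 h,
# consistent with the predicted return to baseline in about 2 h.
```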
The model may also be used to predict other constituents in room air. Figure 10 shows such predictions for glycerol and propylene glycol levels in the mEEC. The transient behaviors are similar to that of nicotine, with the higher values attributable to the higher concentrations of propylene glycol and glycerol in the e-liquid.

Finally, it is worth pointing out that, according to the model predictions, the aerosol constituents rapidly evaporate, resulting in almost 100% of each constituent being in the vapor phase. As a result, the particle mean diameter drops from the initial value of 0.5 µm to a nanometer size in a short time. This is consistent with the measurements of Bertholon et al. [14] and Fernandez et al. [32] showing that the half-life of exhaled e-cigarette aerosol is short, typically about 10 s, which corresponds to the time steps used in this model calculation. In reality, particles are visible in the vicinity of the exhalation position at short times, but they visibly disappear as they travel farther from the source and mix with the room air. The spatial variation of particle size and concentration cannot be predicted by the current well-mixed model. That information, along with the spatial variation of the room level of constituents, is the subject of a distributed model that can accurately capture the temporal and spatial mixing process. We have also developed a CFD-based distributed model, which will be published separately.
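The rapid evaporation noted here is driven by vapor-liquid equilibrium at the droplet surface. The sketch below shows the Raoult's-law-type relation implied by the thermodynamic inputs listed in the Nomenclature at the end of this section (activity coefficient, liquid-phase mole fraction, saturation pressure); the numerical values are illustrative, and the full model's evaporation treatment may differ in detail.

```python
# Equilibrium partial pressure of constituent i over the droplet:
# p_i = gamma_i * x_i * Psat_i(T).  If the ambient partial pressure is lower,
# the droplet loses constituent i by evaporation.  Illustrative values only.

def equilibrium_partial_pressure_kpa(gamma_i, x_i, psat_i_kpa):
    return gamma_i * x_i * psat_i_kpa

# Hypothetical exhaled droplet that is mostly water, at about 25 C.
p_water_surface = equilibrium_partial_pressure_kpa(gamma_i=1.0, x_i=0.95, psat_i_kpa=3.17)
p_water_ambient = 0.5 * 3.17   # ~50% relative humidity in the room air
print(f"surface {p_water_surface:.2f} kPa vs ambient {p_water_ambient:.2f} kPa -> evaporation")
```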
Conclusions

A well-mixed computational model has been developed using principles similar to those referenced by the EPA for indoor air quality analysis where EVPs are used. The mechanistic model is based on physical and thermodynamic interactions between air, vapor, and particles in the aerosol. The results from the well-mixed model were presented, and they agree with measured values of nicotine concentration in indoor spaces following the release of aerosols from two different sources: smoking machine-generated and exhaled aerosol. The model introduced in this study can serve as a useful tool to estimate the level of constituents from exhaled EVP aerosols under a wide variety of usage conditions and in different types of confined spaces, e.g., cars or large commercial rooms, where accurate measurement is difficult and resource intensive.

Author Contributions: Ali A. Rostami developed the model and generated data. He also provided guidance during the design of the experimental study and contributed to the writing of the manuscript. Yezdi B. Pithawalla contributed to the model conceptualization, the experimental designs, and the writing of the manuscript. Jianmin Liu, Michael J. Oldham, Karl A. Wagner, and Mohamadi A. Sarkar provided experimental data, and Kimberly Frost-Pineda helped write part of the manuscript.

Conflicts of Interest: The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:

Nomenclature

The following variables are used in this manuscript:
γ_i: activity coefficient (dimensionless)
x_i: mole fraction of i in the particle (liquid phase)
y_i: mass fraction of i in the particle (liquid phase)
P_sat: saturation pressure at a given temperature (kPa)
R: universal gas constant (kJ/(kmol·K))
T: temperature (K)
M_i: molecular mass of i (kg/kmol)
M: molecular mass of the mixture in the particle (liquid phase) (kg/kmol)
ρ: density of the vapor of i in air (kg/m³)
Hepatic parenchyma and vascular blood flow changes after TIPS with spectral CT iodine density in HBV-related liver cirrhosis

To compare changes in spectral CT iodine densities of hepatic parenchyma and vessels before and after transjugular intrahepatic portosystemic shunt (TIPS) in hepatitis B virus (HBV)-related liver cirrhosis. Twenty-five patients with HBV-related liver cirrhosis who received TIPS for gastroesophageal varices bleeding were recruited. Each patient underwent three-phase contrast CT before and after TIPS within 4 weeks, with the raw data reconstructed at 1.25-mm-thick slices. Iodine density (in milligrams per milliliter) was measured on iodine-based material decomposition images. Multiple regions of interest (ROIs) in liver parenchyma, aorta and portal vein were selected from three slices of images. The portal vein trunk was set as the central one, and mean liver parenchymal iodine densities from the arterial phase (AP), venous phase (VP) and equilibrium phase (EP) were recorded. Quantitative indices of iodine density (ID), including normalized ID in liver parenchyma for the arterial phase (NIDLAP), ID of liver parenchyma for the venous phase (IDLVP), ID of the portal vein in the venous phase (IDPVP) and liver arterial iodine density fraction (AIF), were measured and compared before and after TIPS. Based on Child-Pugh stage, 4, 12 and 9 patients were classified as grade A, B, and C, respectively. Liver volume was comparable before and after TIPS (1110.5 ± 287.4 vs. 1092.0 ± 276.3, P = 0.28). After TIPS, ID was decreased in the aorta (146.0 ± 34.5 vs. 120.9 ± 30.7, P < 0.01) whereas it was increased in liver parenchyma at the arterial phase, as demonstrated by IDAP (9.3 ± 3.1 vs. 13.4 ± 4.4 mg/mL) and AIF (0.40 ± 0.11 vs. 0.58 ± 0.11, P < 0.01). For the venous and equilibrium phases, the quantitative indices remained stable (23.1 ± 4.5 vs. 23.0 ± 5.3 and 19.8 ± 4.1 vs. 19.4 ± 4.6 mg/mL, Ps > 0.05). For the portal vein, ID and NID were increased after TIPS (23.1 ± 11.7 vs. 36.5 ± 13.0 and 16.4 ± 8.5 vs. 31.8 ± 12.8, P < 0.01). No positive correlation between iodine density and preoperative Child-Pugh score was observed. Based on iodine density measurement, spectral CT as a noninvasive imaging modality may assess hepatic parenchyma and vascular blood flow changes before and after TIPS in HBV-related liver cirrhosis. Clinical registration number: ChiCTR-DDC-16009986.

CT has been applied as a quantitative imaging tool in liver lesions, such as hemangioma and hepatocellular carcinomas, with increased sensitivity for differential diagnosis [10][11][12]. Furthermore, based on iodine density from material decomposition, spectral CT exhibits the capability to quantify liver fat concentration and to stage liver cirrhosis 13,14. Therefore, our purpose is to investigate the potential feasibility of spectral CT iodine density as a non-invasive imaging modality in the assessment of hepatic blood flow changes after TIPS in patients with HBV-related liver cirrhosis.

Materials and methods

Patients. This study was performed in accordance with the Declaration of Helsinki, was approved by the institutional ethics committee of our hospital (Beijing Shijitan Hospital, Capital Medical University) in compliance with the Ethical Principles for Medical Research Involving Human Subjects, and was registered in the Clinical Trial Registry under the number ChiCTR-DDC-16009986. Informed consent was signed by each patient.
From January to May 2019, all patients with gastroesophageal bleeding resulting from hepatitis B-related cirrhosis were treated with TIPS. The inclusion criteria were as follows: (1) patients with HBV-related liver cirrhosis, (2) contrast CT was performed within 4 weeks before and after TIPS, and (3) the Child-Pugh score was evaluated within 2 weeks before TIPS. The exclusion criteria were as follows: (1) patients with malignant hepatic tumors, either primary or metastatic, (2) any condition affecting liver blood flow, including iatrogenic causes (liver surgery, splenectomy, and TIPS) or portal vein lesions (portal venous thrombosis and portal cavernous transformation), (3) allergy to iodinated contrast media, (4) estimated glomerular filtration rate (GFR) lower than 30 mL/min, and (5) severe motion artifacts. Complications, such as hepatic encephalopathy, coma, and liver failure, were recorded during the 4-week follow-up.

Spectral CT examination and quantitative indices measurement. Quadruple-phase (pre-contrast, arterial, venous and equilibrium phase) contrast-enhanced CT (Revolution, GE Healthcare, WI) was performed [14][15][16]. All patients were scanned in spectral imaging mode with the following parameters: fast-switching tube voltage of 80/140 kVp, automatic tube current from 100 to 600 mA with the noise index set at 9, 8 cm detector, slice thickness of 5 mm, rotation speed of 0.5 s, helical pitch of 0.992:1, and 40% ASiR. Nonionic contrast media (Omnipaque 350) were injected through an antecubital vein at a rate of 5 mL/s, with a total volume of 80-120 mL (1.5 mL per kilogram of body weight). Hepatic arterial phase (AP) imaging was determined by automatic scan-triggering software (SmartPrep; Revolution CT, GE Healthcare, WI, https://www.gehealthcare.com/) when the trigger attenuation threshold (120 HU) was reached at the level of the supraceliac abdominal aorta, while the portal venous and equilibrium phases were initiated at 45 and 120 s after the AP, respectively. All raw data were reconstructed with 2.5-mm-thick slices. Then, monochromatic images at 70 keV and water- and iodine-based material decomposition images were analyzed. All post-processing was performed on the Advanced Workstation (Version 4.7, GE Healthcare, WI, https://www.gehealthcare.com/).

Iodine density measurement. Iodine densities (in milligrams per milliliter) were measured on iodine-based material decomposition images, including the non-contrast, AP and PVP phases. Multiple regions of interest (ROIs) (mean area larger than 100) were placed in liver parenchyma from different hepatic lobes, including the lateral, medial, anterior, and posterior segments at the level of the hepatic hilum, with large vessels, liver cysts, calcifications and prominent artifacts carefully avoided. All ROIs were placed at 3 different levels, with the hepatic hilum serving as the central one. Furthermore, the size, shape, and position of the ROIs were kept consistent among images by applying the copy-and-paste function. Then, an average value was calculated as the iodine density (ID) 15. Quantitative indices of ID were measured according to Dong's report 15 as follows: (1) the ID of liver parenchyma at the arterial phase (IDLAP) or venous phase (IDLVP) was calculated as the difference between the AP or VP and the non-contrast phase, respectively; (2) the ID of the aorta in the AP (IDAO) and the ID of the portal vein in the VP (IDPVP) were recorded; (3) the normalized ID was defined as NIDLAP = IDLAP/IDAO; and (4) the liver arterial iodine density fraction (AIF) was defined as AIF = IDLAP/IDLVP (Fig. 1).
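A minimal sketch of these index definitions follows; the ROI values are placeholders rather than patient data, chosen so that the resulting AIF lands near the pre-TIPS value of about 0.40 reported in this study.

```python
# Quantitative iodine-density indices as defined above (all inputs in mg/mL,
# values are placeholders, not patient measurements).

def iodine_indices(id_noncontrast, id_ap, id_vp, id_aorta_ap):
    id_lap  = id_ap - id_noncontrast        # liver parenchyma, arterial phase
    id_lvp  = id_vp - id_noncontrast        # liver parenchyma, venous phase
    nid_lap = id_lap / id_aorta_ap          # normalized to the aortic iodine density
    aif     = id_lap / id_lvp               # liver arterial iodine density fraction
    return {"IDLAP": id_lap, "IDLVP": id_lvp, "NIDLAP": nid_lap, "AIF": aif}

example = iodine_indices(id_noncontrast=1.0, id_ap=10.3, id_vp=24.1, id_aorta_ap=146.0)
print(example)   # AIF ~ 0.40 with these placeholder numbers, near the pre-TIPS value
```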
Liver volume measurement. Liver volume was measured on the venous-phase images of the enhanced CT 17,18. First, the venous-phase contrast CT images were analyzed with the "total liver and segment separation" software on the Advanced Workstation (Version 4.7, GE Healthcare, WI, https://www.gehealthcare.com/); the "activate AutoSelect" tool was then applied by radiologists to adjust the edge of the liver slice by slice, and finally the liver volume was calculated automatically (Fig. 2).

Statistical analyses. Statistical analysis was carried out using SPSS 22.0 software (SPSS, Inc., Chicago, Illinois, USA, https://www.ibm.com/products/spss-statistics). Paired-sample t-tests were used to compare quantitative indices before and after TIPS. Pearson correlation analyses were performed to assess associations between Child-Pugh scores and quantitative indices before TIPS. P < 0.05 was considered statistically significant.

Results

In total, 25 patients (7 female, 18 male; age range 29-74 years) were enrolled in our study. Based on Child-Pugh stage, 4, 12 and 9 patients were classified as grade A, B and C, respectively, before TIPS. Six patients were treated with TIPS only, whereas 21 were treated with TIPS combined with gastric coronary vein embolization. Two patients developed hepatic encephalopathy within 2 weeks after surgery and recovered during the 4-week follow-up with conservative therapy. No hepatic coma or liver failure occurred.

Quantitative indices from spectral CT were compared before and after TIPS. Liver volume remained stable before and after TIPS (1110.5 ± 287.4 vs. 1092.0 ± 276.3, P = 0.28). ID in liver parenchyma, NIDLAP and AIF were increased after TIPS. By contrast, ID in liver parenchyma at the venous or equilibrium phase was stable after TIPS. ID in the aorta was decreased after TIPS. For the portal vein, ID and NIDPVAP were increased, while ID at the venous or equilibrium phase was stable after TIPS (Table 1). No positive correlation of iodine density with preoperative Child-Pugh score was observed.
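As an illustration of the comparisons just summarized (paired pre/post indices and the correlation with the Child-Pugh score), the sketch below uses placeholder values rather than the patient data; the SciPy functions are standard, although the actual analysis was performed in SPSS as described above.

```python
# Paired-sample t-test (before vs. after TIPS) and Pearson correlation with
# Child-Pugh score, mirroring the statistical analysis described in Methods.
# All numbers below are placeholders, not patient data.
from scipy import stats

aif_before = [0.35, 0.42, 0.38, 0.45, 0.40]
aif_after  = [0.55, 0.61, 0.52, 0.63, 0.58]
child_pugh = [6, 9, 8, 11, 7]

t_stat, p_paired = stats.ttest_rel(aif_before, aif_after)
r, p_corr = stats.pearsonr(aif_before, child_pugh)
print(f"paired t = {t_stat:.2f}, p = {p_paired:.4f}; Pearson r = {r:.2f}, p = {p_corr:.4f}")
```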
Discussion

TIPS is an effective therapy for gastroesophageal varices bleeding caused by portal hypertension in liver cirrhosis 1,[5][6][7]. Once shunts between the portal vein and inferior vena cava are established, portal hypertension is alleviated, so that the risk of gastroesophageal varices bleeding is reduced 5,6. However, the portosystemic shunt after TIPS can further reduce hepatic blood flow, which impairs hepatic detoxification and thus induces complications such as hepatic encephalopathy and liver failure 2. In our study, the ID of liver parenchyma and of the portal vein at the arterial phase increased markedly, whereas the peak value of the aorta decreased. By contrast, ID in liver parenchyma and the portal vein remained stable at the venous and equilibrium phases after TIPS. Therefore, ID measured on spectral CT has great potential for non-invasive quantitative evaluation of liver blood flow changes after TIPS.

Spectral CT has been reported as a noninvasive tool for quantitative assessment of liver fibrosis [14][15][16]. In our study, IDLAP, NIDLAP and AIF were increased significantly after TIPS, suggesting that the arterial blood supply to liver parenchyma is increased after TIPS. The liver is a solid organ with a dual blood supply, i.e., the hepatic artery and the portal vein. Under normal circumstances, the hepatic artery accounts for about 25% of the total blood supply. In patients with liver cirrhosis, the proportion of arterial blood supply is increased significantly (0.40 ± 0.11), which is consistent with the increased AIF reported by Dong et al. 15. However, after TIPS, the shunt allows part of the portal vein blood to flow directly back into the systemic circulation, so that the portal vein blood supply to liver parenchyma is further reduced. With an increased arterial and decreased portal vein blood supply, AIF was further increased after TIPS (0.40 ± 0.11 vs. 0.58 ± 0.11, P < 0.01). Therefore, dynamic changes in hepatic blood flow at the arterial phase can be evaluated noninvasively by spectral CT iodine density in hepatic parenchyma.

In our study, no statistical difference in iodine density of liver parenchyma was observed between the venous and equilibrium phases after TIPS, which is inconsistent with previous reports in which the blood supply of liver parenchyma was decreased significantly on CT perfusion 8,9. This inconsistency may be related to abnormal hepatic blood flow and the pharmacokinetics of the contrast medium in liver cirrhosis. Iodine density was calculated at the venous and equilibrium phases, set at 70 s and 120 s, respectively, and it only reflects the static distribution of contrast media in liver parenchyma at that moment. Due to the distorted structure of pseudo-lobules in HBV-related liver cirrhosis, the normal wash-in and wash-out of contrast media would be disturbed. Besides, vascular distortions commonly found in liver cirrhosis, including arterio-portal fistula, arteriovenous fistula, and abnormal liver perfusion, could also be present. Thus, it is impossible to quantify the actual blood supply to liver parenchyma from portal vein perfusion with iodine density at the venous or equilibrium phase at a single time point. Further research, such as CT and MR perfusion for specific quantitative analysis, is needed to evaluate the actual changes in liver blood supply 4,9.

Dynamic changes in ID exhibited different trends in the hepatic vascular system on three-phase contrast CT. In our study, ID was decreased in the aorta (146.0 ± 34.5 vs. 120.9 ± 30.7, P < 0.01), whereas it was increased in the portal vein (23.1 ± 11.7 vs. 36.5 ± 13.0, P < 0.01), as was NIDPVAP (16.4 ± 8.5 vs. 31.8 ± 12.8, P < 0.01), at the arterial phase. By contrast, ID in the portal vein at the venous phase remained stable (55.5 ± 9.1 vs. 53.0 ± 10.8, P = 0.17). Thus, both the portal and systemic circulation are affected after TIPS, especially at the arterial phase. Portal-systemic shunts result in hepatic blood flow and pharmacokinetic changes in contrast media; however, the biopathology of this consequence needs further investigation.

In our study, spectral CT ID in liver parenchyma or blood vessels showed no positive correlation with the preoperative Child-Pugh score, which is inconsistent with previous reports 15. This may be related to the adoption of the Child-Pugh score instead of Child-Pugh grading. Besides, liver volume (1110.5 ± 287.4 vs. 1092.0 ± 276.3, P = 0.28) was stable after TIPS. However, 2 patients developed hepatic encephalopathy within 2 weeks after TIPS. Interestingly, in these patients liver volume decreased by more than 10 percent whereas blood ammonia increased, a potential indication for quantitative assessment of TIPS-related complications. There are some limitations in our study.
Firstly, contrast-enhanced CT was performed at three time points, the arterial, venous and equilibrium phases; however, ID only reflects a static snapshot of blood distribution in liver parenchyma at a specific time point, rather than actual blood perfusion. ID is therefore an indirect reflection of the blood supply to liver parenchyma, and more studies should be performed to quantify actual blood perfusion, especially comparisons of CT and MR perfusion for liver blood flow. Secondly, our study focused on HBV-related cirrhosis; whether iodine density could be used in other diseases, such as hepatitis C, alcoholic hepatitis and autoimmune hepatitis, needs further investigation. Thirdly, ID showed no correlation with the Child-Pugh score in our study, which is inconsistent with Dong's report. This may be related to the small sample size, so studies in larger patient populations should be performed in the future.

In conclusion, spectral CT iodine density demonstrates an increased blood supply in liver parenchyma and the portal vein at the arterial phase after TIPS and, as a non-invasive quantitative imaging modality, has the potential to evaluate hepatic blood flow changes in HBV-related portal hypertension and liver cirrhosis.
In Reply: Precautions for Endoscopic Transnasal Skull Base Surgery During the COVID-19 Pandemic.

To the Editor: COVID-19 has been spreading all over the world over the past 2 mo.1 Owing to the striking increase of COVID-19 cases, the safety of medical workers is a concern.2 Because the virus exists in all parts of the respiratory tract, there is a heated discussion on the timing of surgical treatment of respiratory diseases, especially the safety assessment of endoscopic transsphenoidal surgery in the department of neurosurgery. Recently, Patel et al3 submitted an article titled "Precautions for Endoscopic Transnasal Skull Base Surgery During the COVID-19 Pandemic" to remind neurosurgeons and otolaryngologists to pay attention to extended endoscopic skull base surgery in patients with COVID-19. In the article, Patel et al3 cited the co-occurrence of 14 COVID-19-infected medical workers and a COVID-19-affected patient with pituitary adenoma who underwent endoscopic transsphenoidal surgery in our department, and raised the safety issue of transsphenoidal surgery in this emerging COVID-19 situation. However, what was described does not accord with the facts.

The first argument is about the sentence "multiple members (>14 by report) of the patient care team, both within and outside of the operating room, became infected from what became recognized as human-to-human transmission of COVID-19". It is not accurate. At the early stage of the COVID-19 outbreak, we had 1 patient who underwent endoscopic transsphenoidal surgery on January 6, 2020 and was diagnosed with COVID-19 13 d later. Among the infected medical workers, 10 nurses and 4 neurosurgeons were diagnosed, and only 4 nurses had direct contact with the COVID-19 patient. The second problem is that the authors3 believed that all the medical workers who participated in the surgery were infected, especially from the experience of the second case that the authors cited, for which we have no exact information in the Wuhan neurosurgery medical system. However, according to our retrospective survey of our case, none of the medical staff who participated in the surgery were diagnosed with COVID-19 until March 31, 2020. Today, all the infected medical staff have recovered. More importantly, the medical workers later diagnosed with COVID-19 in our department were staff who were outside the operating room. As for the infected neurosurgeons in our department, it is more plausible that the transmission was postoperative rather than intraoperative.

Finally, the opinion that the authors3 delivered should be carefully assessed. The reason why neurosurgeons and otolaryngologists were infected needs more data to illustrate. Based on the whole infection event that we experienced, we have some facts and experiences to share with the medical community. The infection event happened in our department at the early stage because of limited knowledge about COVID-19 and insufficient protective measures. Besides, the frequent interaction between medical workers in our department promoted transmission. Thus, information about COVID-19 should continue to be accumulated and elucidated, and reducing contact between people is a necessary means to prevent the spread of the virus. In this infection event, more nurses were infected than surgeons, because nurses and patients are in direct contact, such as in daily medical care.
So, compared to droplet transmission, contact transmission may be an important factor of transmission in medical workers, which we more likely ignored at the early stage. Therefore, it is very important to wash hands and clean the surfaces of objects in wards and living areas. What is more, it is vital to make sure that once COVID-19 patients are confirmed, strict isolation measures are taken as soon as possible.

As for the transsphenoidal surgery, Patel et al3 believe that aerosol droplets coming from the endonasal surgery will increase the possibility of infection of medical staff in the operating room. However, from our case, we have learned that an intraoperative aspirator, protective clothing, an N95 mask, and a face shield can provide sufficient protection to our medical staff in the operating room. What Patel et al3 claimed in their work might provoke unnecessary anxiety toward endonasal endoscopic procedures based on an anecdotal statement.

In sum, for medical staff, proper protective measures, including N95 masks, face shields, protective clothing, and reduced contact with infected patients, are necessary. No convincing evidence exists to show that there is an increased possibility of infection from endoscopic transsphenoidal surgery under the above protective measures. In this emerging COVID-19 situation and for patients' safety, our advice is to avoid elective endoscopic transsphenoidal surgery except in an emergency case, in which situation level-3 protection is definitely needed and a negative pressure operating room is recommended.
Disclosures

The retrospective survey in the letter was supported by the National Natural Science Foundation of China (grants 81272778 and 81974390 to Dr X. Jiang) and the Fundamental Research Funds for the Central Universities (grant 2020kfyXGYJ010 to Dr X. Jiang). The authors have no personal, financial, or institutional interest in any of the drugs, materials, or devices described in this article.

Department of Neurosurgery, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
The Lipocalin α1-Microglobulin Has Radical Scavenging Activity*

The lipocalin α1-microglobulin (α1m) is a 26-kDa glycoprotein present in plasma and in interstitial fluids of all tissues. The protein was recently shown to have reductase properties, reducing heme-proteins and other substrates, and was also reported to be involved in binding and scavenging of heme and tryptophan metabolites. To investigate its possible role as a reductant of organic radicals, we have studied the interaction of α1m with the synthetic radical, 2,2′-azino-bis-(3-ethylbenzthiazoline-6-sulfonic acid) (ABTS radical). The lipocalin readily reacted with the ABTS radical, forming reduced ABTS. The apparent rate constant for this reaction was 6.3 ± 2.5 × 10³ M⁻¹ s⁻¹. A second reaction product with an intense purple color and an absorbance maximum at 550 nm was formed at a similar rate. This was shown by liquid chromatography/mass spectrometry to be derived from covalent attachment of a portion of the ABTS radical to tyrosine residues on α1m. The relative yields of reduced ABTS and the purple ABTS derivative bound to α1m were ∼2:1. Both reactions were dependent on the thiolate group of the cysteine residue in position 34 of the α1m polypeptide. Our results indicate that α1m is involved in a sequential reduction of ABTS radicals followed by trapping of these radicals by covalent attachment. In combination with the reported physiological properties of the protein, our results suggest that α1m may be a radical reductant and scavenger in vivo.

The lipocalins are a protein superfamily with 30-35 members distributed among animals, plants, and bacteria (1,2). The members of the superfamily have a highly conserved three-dimensional structure, 8-9 antiparallel β-strands folded into a barrel with one closed and one open end. The interior of the barrel forms a binding site for small hydrophobic ligands, and this structural property is the basis for a surprisingly wide array of biological functions. So far, three lipocalins have been shown to be enzymes: prostaglandin D-synthase (3), violaxanthin de-epoxygenase in plants (4), and α1-microglobulin (α1m)2 (5-7), which was recently shown to have reductase/dehydrogenase properties (8). Also called protein HC (9), α1m is one of the originally described lipocalins (10) and is one of the most widespread lipocalins phylogenetically (7). So far, it has been found in mammals, birds, fish, and amphibians. The protein is synthesized by the liver (11), rapidly distributed by the blood to the extravascular compartment (12), and found in most organs in interstitial fluids, connective tissue, and basement membranes (13-15). It is especially abundant at interfaces between the cells of the body and the environment, such as in the lungs, intestine, kidneys, and placenta (16-18). Due to its small size, 26 kDa, α1m is rapidly cleared from the blood by glomerular filtration. Most of the filtered α1m is degraded in the kidneys, but a small part is excreted in the urine (12). α1m isolated from plasma and urine is yellow-brown and displays charge heterogeneity (i.e. a broad band upon electrophoresis) (19). This is caused by an array of small chromophoric groups attached to the amino acid residues Cys-34, Lys-92, Lys-118, and Lys-130, which are localized around the entrance of the lipocalin pocket (20-22).
The biological function of α1m is unknown, although it has a number of immunosuppressive properties, such as inhibition of antigen-induced lymphocyte cell proliferation, cytokine secretion (23-25), and the oxidative burst of neutrophils (26). Several recent findings suggest that α1m is involved in reduction and scavenging of biological pro-oxidants, such as heme and heme-proteins. First, it was shown that α1m binds heme strongly and obtains the yellow-brown chromophore by incubation with hemoglobin or erythrocyte ghosts, concomitant with degradation of the bound heme (27). A processed form, t-α1m, which lacks the C-terminal tetrapeptide LIPR and has enhanced heme degradation properties, is also induced by incubation with hemoglobin. t-α1m is found in urine (27) and continuously forms in chronic leg ulcers, a hemolytic inflammatory condition where free heme and iron are considered to be oxidative pathogenic factors (28). Second, lysyl residues in urine α1m from hemodialysis patients were found to be modified by kynurenine derivatives (29). These are tryptophan catabolites that have a propensity to form free radicals (30-32) and are present at elevated concentrations in plasma of hemodialysis patients (33). Fourth, α1m was shown to enzymatically reduce cytochrome c, methemoglobin, nitro blue tetrazolium, and free iron, using NADH, NADPH, or ascorbate as electron-donating co-factors (8). The thiol group in position Cys-34 and the three lysyls Lys-92, Lys-118, and Lys-130 were implicated in the active site. Finally, the cellular expression of α1m is up-regulated by hemoglobin and reactive oxygen species (34). These reports suggest that α1m could potentially undergo reactions with biological radicals and that these reactions may be related to its physiological function. To investigate how α1m reacts with radicals, we have studied the interaction of α1m with the stable radical, 2,2′-azino-bis-(3-ethylbenzthiazoline-6-sulfonic acid) (ABTS). This compound has been used extensively to investigate antioxidant mechanisms (e.g. see Ref. 35). We show that α1m reduces the ABTS radical and simultaneously covalently binds to the radical, forming a distinct purple adduct. The results suggest that the lipocalin α1m may be a radical reductase and scavenger in vivo. MATERIALS AND METHODS Proteins and Reagents-Wild type and mutated variants of α1m were expressed in Escherichia coli. Using site-directed mutagenesis, a Cys → Ser substitution was introduced at amino acid position 34 to give the C34S-α1m mutant (36). In the recombinant α1m forms, the N terminus was elongated by a 15-amino acid peptide containing eight histidines (His tag) and an enterokinase cleavage site (DDDDKA).
The His tag was removed by incubating 10 mg of α1m with 400 units of enterokinase (Sigma) for 5 h at room temperature in 20 mM Tris-HCl, 0.5 M NaCl, pH 8.0. His tag-free α1m was then separated from enterokinase by gel chromatography, and the N-terminal amino acid sequence was determined. Human α1m was prepared from plasma (37), urine (38), and baculovirus-infected insect cells (39) as described. Human plasma, urine, saliva, and tear fluid were obtained from healthy volunteers. All other proteins and reagents were of analytical grade and were purchased from Sigma if not indicated otherwise. Alkylation of α1m-Thiol groups were alkylated by incubating α1m (0.18 mM) with 22 mM iodoacetamide (IAA) in phosphate-buffered saline (PBS; 10 mM sodium phosphate, pH 7.4, 120 mM NaCl, 3 mM KCl) for 1 h at room temperature in the dark and then dialyzing exhaustively against 25 mM Tris-HCl + 50 mM NaCl, pH 8. Reduced thiol groups were quantified by alkylation with iodo-[14C]acetamide. Reduction of ABTS Radical-A stock solution of ABTS radical was prepared following the procedure of Re et al. (40) with minor modifications. Potassium persulfate was added to a 7 mM ABTS solution in water, to a final concentration of 2.8 mM, allowing at least 5 h for the reaction. The solution was kept in the dark and used within 24 h. In some experiments, the amount of potassium persulfate was varied, producing different ABTS radical/ABTS ratios. The stock solution was diluted 125 times with PBS. α1m, control proteins, and other reagents were added as described for each experiment, and the reaction was followed by monitoring absorbance changes of ABTS and its radical. End point scanning of the reaction products was done after reducing remaining ABTS radicals by adding NaN3 to a final concentration of 60 mM as described (41). pH studies were done by diluting the stock solution with 20 mM sodium acetate, pH 5.0, or 20 mM sodium phosphate, pH 6.0, 7.0, or 8.0, or 20 mM glycine-OH, pH 9.0. Estimation of Kinetic Parameters-Initial reaction rates were estimated by linear regression analysis of absorbance values obtained during the first 1 min of the reaction. The total formation of products (i.e. the reduced form of ABTS and the purple α1m modification) and the total consumption of ABTS radical during the initial, rapid reaction phase were determined by linear regression analysis, as illustrated in Fig. 3. Km and Vmax values were determined by nonlinear regression of initial reaction rates using different initial α1m (0.5-4 µM) and ABTS radical (7-55 µM) concentrations and using the same ratio between the initial ABTS radical and ABTS concentrations. Spectrophotometric Determinations-Spectrophotometric analyses were done either in a Beckman 7500 photodiode array spectrophotometer or a Beckman DU 640i spectrophotometer. The reaction between α1m and ABTS radical was followed by scanning a 0.5-ml reaction mixture, blanking with PBS or the appropriate dilution buffer. Reading at time 0 was done before the addition of α1m or control proteins, and at regular time intervals after the addition of the proteins. Concentrations of ABTS were determined by using ε340 = 4.8 × 10⁴ M⁻¹ cm⁻¹ and of the ABTS radical using ε415 = 3.6 × 10⁴ M⁻¹ cm⁻¹ (42). Concentrations of α1m were determined by using the extinction coefficients at 280 nm reported for urine, plasma, and baculovirus-infected insect cell α1m (39) and 3.6 × 10⁴ M⁻¹ cm⁻¹ for recombinant α1m.
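As a worked illustration of the concentration determinations just described, the sketch below applies the Beer-Lambert relation (c = A/(ε·l)) with the extinction coefficients quoted in the text (ε340 = 4.8 × 10⁴ M⁻¹ cm⁻¹ for ABTS, ε415 = 3.6 × 10⁴ M⁻¹ cm⁻¹ for the ABTS radical). The absorbance readings and the 1-cm path length are illustrative assumptions, not values reported in the study.

```python
# Minimal sketch of the Beer-Lambert concentration calculation, c = A / (epsilon * l).
# Extinction coefficients come from the text; the absorbance readings and the 1-cm
# cuvette path length are assumed for illustration only.

EPSILON_ABTS_340 = 4.8e4          # M^-1 cm^-1, reduced ABTS at 340 nm
EPSILON_RADICAL_415 = 3.6e4       # M^-1 cm^-1, ABTS radical at 415 nm
PATH_LENGTH_CM = 1.0              # assumed path length

def concentration_molar(absorbance, epsilon, path_cm=PATH_LENGTH_CM):
    """Concentration in mol/L from an absorbance reading."""
    return absorbance / (epsilon * path_cm)

a340, a415 = 0.96, 0.72           # hypothetical readings
print(f"ABTS    ~ {concentration_molar(a340, EPSILON_ABTS_340) * 1e6:.1f} uM")
print(f"radical ~ {concentration_molar(a415, EPSILON_RADICAL_415) * 1e6:.1f} uM")
```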
The absorbance values at 550 nm at different time points (A550) were calculated after correction for spillover absorbance of the ABTS radical using the following formula: A550 = 0.5 × (A550(observed) − 0.403 × A735) + 0.5 × (A550(observed) − 0.163 × A415). The coefficients in this equation were determined by absorbance scanning of the ABTS radical at known concentrations. Purification of Purple α1m-The purple end product of α1m was purified by gel filtration. α1m (80-140 µM) was incubated for 5 min with the ABTS radical/ABTS stock solution (1.2-1.8 mM) in PBS. After centrifugation at 8000 × g for 2 min, the reaction product was applied to a 1-ml column packed with Sephadex G-25 Fine and equilibrated with 2 mM NH4HCO3, pH 8.5. The column was eluted with the equilibration buffer at free flow, and 0.2-ml fractions were collected manually. The eluted fractions were analyzed by absorbance scanning. Protein-containing fractions were pooled. SDS-PAGE, Blotting, and N-terminal Sequence Analysis-SDS-PAGE was performed using 12% gels in the buffer system described by Laemmli (43), with or without 2% (v/v) β-mercaptoethanol in the sample buffers. High molecular mass standards (Rainbow markers; Amersham Biosciences) were used. N-terminal amino acid sequence analysis was achieved by Edman degradation (Protein Analysis Center, KI, Stockholm, Sweden) of bands separated by SDS-PAGE and transfer to polyvinylidene difluoride membranes (Immobilon-P, Millipore, Bedford, MA) as described (44). Reaction of ABTS Radical with Tyrosine-ABTS radicals were generated by using hydrogen peroxide (50 µM) and lactoperoxidase (5 µg/ml) to oxidize ABTS (100 µM) in 50 mM phosphate buffer, pH 7.4. When the formation of ABTS radicals reached a maximum (approximately 35 µM), catalase (10 µg/ml) was added to scavenge residual hydrogen peroxide. Tyrosine (10-30 µM) was then reacted with the ABTS radicals. It caused a stoichiometric loss in ABTS radicals within 5 min and promoted the formation of a product that had an absorbance maximum from 500 to 550 nm. This final reaction mixture was analyzed by LC/MS as described below. Reaction of α1m with Glycyl-Tyrosyl Radicals-Radicals of the Gly-Tyr peptide were generated using lactoperoxidase and hydrogen peroxide. Reactions were carried out in 10 mM phosphate buffer, pH 7.8, containing 140 mM NaCl, 20 µM Gly-Tyr, 25 µg/ml lactoperoxidase, and 10 µM diethylenetriaminepentaacetic acid with or without α1m, orosomucoid, or HSA. They were started by adding 5 µM hydrogen peroxide. After 30 min at 20°C, the fluorescence due to formation of dityrosine was measured (λex, 325 nm; λem, 405 nm) in a Jasco J-810 Spectrofluorimeter (Jasco Scandinavia AB, Mölndal, Sweden). Preparation of Protease Fragments of α1m after Reaction with ABTS Radicals-ABTS and α1m were reacted and desalted as described previously to generate the purple product. After boiling for 2 min, Pronase (Protease XIV from Streptomyces griseus) at a 1:25 enzyme/substrate ratio was added to the ABTS-α1m and incubated at 37°C for 2 h. Alternatively, trypsin was added at a ratio of 1:10 and incubated at 37°C for 3 h. The digests were then subjected to HPLC separation on a Phenomenex Luna 5 C18 column (250 × 4.6 mm) using the following stepwise gradient. From 0 to 5 min, the eluent was 100% solvent A (10% methanol in 50 mM phosphate buffer, pH 6.5, containing 2.5 mM n-octylamine); the organic phase was then increased to 50% over the following 5 min and kept at 50% between 10 and 30 min.
The flow rate was 0.8 ml/min. Peaks that absorbed at 550 nm were well resolved from ABTS and ABTS radical. They were collected for subsequent analysis by LC/MS. Liquid Chromatography-Mass Spectrometry Analysis of α1m Digests-Tryptic peptides were analyzed by LC/MS by selecting and fragmenting major ions that eluted in the chromatogram. Peptides that contained ABTS were identified by the presence of an absorbance maximum around 550 nm and characteristic molecular fragments of ABTS in the MS/MS spectrum. The purple digestion products of the ABTS-α1m were separated on a Jupiter Proteo column (particle size 4 µm, 150 × 2.0 mm) (Phenomenex, Torrance, CA), using a Surveyor HPLC pump (Thermo Corp., San Jose, CA). The column was maintained at 30°C. The products were eluted at a flow rate of 0.2 ml/min using a linear gradient of two solvents: solvent A (0.1% formic acid) and solvent B (0.1% formic acid, 90% acetonitrile). The gradient was as follows: 0-30 min, increased solvent B to 50%; 30-32 min, increased solvent B to 100%; 32-34 min, maintained solvent B at 100%; 34-35 min, decreased solvent B to 0%. The injection volume was 20 µl. The HPLC was coupled to an ion trap mass spectrometer (ThermoFinnigan LCQ Deca XP Plus; Thermo Corp.) equipped with an electrospray ionization source. The mass spectrometer was operated with positive ionization using full scan mode (scan range 100-2000 m/z). Spray voltage was set at 3.5 kV, the capillary temperature was set at 275°C, and the sheath gas flow was set at 35 units (instrument units). For MS/MS experiments, parent ions were fragmented in the ion trap using 35% collision-induced energy. RESULTS Reaction between α1m and ABTS Radical-The ABTS radical was readily reduced by α1m (Fig. 1). Overlaid scans of the solution show a decrease of the ABTS radical-specific 415 and 735 nm peaks and a concomitant increase of the ABTS-specific peak at 340 nm. An almost complete reduction was seen after 20 min by 3.5 µM α1m (Fig. 1A). Remaining ABTS radicals were reduced by adding 60 mM NaN3, revealing two end products, reduced ABTS, represented by the 340 nm peak, and a novel peak at 550 nm (Fig. 1A, inset), which gave the solution a purple color. Two separate phases of the reaction could be distinguished (Fig. 1B), an initial faster phase over the first 5 min and a second slower phase that was still ongoing after 2 h. A clear difference in the rate of the first phase was seen between α1m and HSA (Fig. 1B) as well as the control proteins ovalbumin, orosomucoid, and soybean trypsin inhibitor (not shown). In contrast, the rate of the second phase was similar between α1m and the control proteins. Thus, the second phase was regarded as nonspecific and is not discussed further. After 5 min, ~8-9 ABTS radical molecules had been consumed per molecule of α1m. Trolox, a water-soluble analogue of vitamin E, reduced ABTS radical stoichiometrically at a 1:1 molar ratio (not shown). To study the reaction products, 1 mg of α1m was allowed to react with a ~10-12-fold excess of ABTS radicals for 5 min, applied to a Sephadex G-25 column, and eluted, and fractions were collected (Fig. 2, A and B). The purple product (i.e. the absorbance at 550 nm) co-eluted with the α1m protein (absorbance at 280 nm), whereas remaining ABTS radicals and ABTS were eluted later. The pH dependence of the reactions was investigated between pH 5 and 9 using standard conditions (Fig. 2C).
A slow rate of ABTS radical consumption and formation of reduced ABTS was seen at pH 5, the rates increased between pH 5 and 8, and no further increase was seen at pH 9. This suggests possible involvement of cysteine, tyrosine, or histidine residues on α1m, side chains on which deprotonization is likely to occur between pH 5 and 8. A much more striking pH dependence was seen for the formation of the purple ABTS-α1m. The absence of reaction below pH 7 and a sharp increase in the rate between pH 7 and 8 suggest involvement of side groups with a pKa around 7.5. The His tag of the recombinant α1m did not influence the rates of ABTS radical consumption or formation of ABTS-α1m (not shown). This was supported by experiments using plasma and urine α1m, which lack the His tag (see below). The addition of 0.25-1 µM superoxide dismutase did not slow down the consumption of the ABTS radical or the formation of ABTS-α1m (not shown), suggesting that superoxide radicals were not involved. The rate of ABTS radical loss during the first 60 s of the reaction was estimated in various body fluids (Table 1). The decay of the radical in the absence of α1m was substantial in most fluids, but at higher dilutions, this could be subtracted from the loss induced by α1m. As shown in Table 1, similar rates are seen in plasma, urine, saliva, and tear fluid, suggesting that similar reactions may occur in vivo. Furthermore, similar rates are seen at various dilutions of saliva and tear fluid. Kinetics and Stoichiometry-The formation of ABTS-α1m coincided in time with the consumption of ABTS radicals and formation of reduced ABTS (Fig. 3A). This suggests that the three reactions are linked. Fig. 3B illustrates how the rate of the consumption of the ABTS radical and the total consumption of ABTS radical were calculated. The same method was employed to calculate the corresponding parameters for the formation of reduced ABTS and the purple ABTS-α1m. The reaction rates for ABTS radical consumption, ABTS formation, and production of ABTS-α1m (A550), calculated as described in the legend to Fig. 3B, were determined at different initial concentrations of the ABTS radical and ABTS. The rate of formation of ABTS-α1m (A550) was independent of the initial ABTS radical and ABTS concentrations (not shown). However, both the rate of ABTS radical consumption and rate of ABTS formation increased to a maximum with increasing initial concentration of ABTS radical (Fig. 4A). The plots show that the rate of consumption of ABTS radical exceeded the rate of formation of reduced ABTS. The Vmax and Km values for the loss of ABTS radicals were calculated by nonlinear regression to be 0.68 ± 0.06 µM s⁻¹ and 27.2 ± 8.1 µM, respectively. These gave a first order rate constant (Vmax/[α1m]) for reaction of ABTS radical with α1m of 0.17 ± 0.02 s⁻¹ and an apparent second order rate constant (Vmax/Km or kapp) for reduction of ABTS radicals of 6.3 ± 2.5 × 10³ M⁻¹ s⁻¹. The total ABTS radical consumption and total ABTS formation were plotted as functions of the initial ABTS radical concentration (Fig. 4B). This demonstrated that 8-9 ABTS radical molecules were consumed per α1m molecule, and six ABTS molecules were formed during the first phase of the reaction. This also suggests that up to 2-3 molecules of ABTS were bound to each molecule of α1m (i.e. the amount of ABTS radicals not converted to reduced ABTS).
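The kinetic constants quoted above were obtained by nonlinear regression of initial rates against the initial ABTS radical concentration. The sketch below shows one way such a fit could be set up with SciPy. The rate data are invented for illustration; only the hyperbolic model form and the derived quantities (a first-order constant Vmax/[α1m], and kapp computed here as (Vmax/[α1m])/Km, the form consistent with the reported M⁻¹ s⁻¹ units) follow the text, and the assumed α1m concentration is simply a value within the 0.5-4 µM range used in the experiments.

```python
# Illustrative Michaelis-Menten-type fit of initial ABTS radical consumption rates.
# The substrate concentrations and rates are made-up example numbers; the analysis
# mirrors the text: fit v = Vmax*S/(Km + S), then derive Vmax/[alpha1m] (s^-1) and
# an apparent second-order constant in M^-1 s^-1.
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    return vmax * s / (km + s)

s_uM = np.array([7.0, 14.0, 21.0, 28.0, 41.0, 55.0])        # initial ABTS radical, uM (example)
v_uM_per_s = np.array([0.14, 0.23, 0.29, 0.34, 0.41, 0.45])  # initial rates, uM/s (example)

(vmax, km), _ = curve_fit(michaelis_menten, s_uM, v_uM_per_s, p0=(0.7, 25.0))

alpha1m_uM = 4.0                         # assumed, within the 0.5-4 uM range used in the text
first_order = vmax / alpha1m_uM          # s^-1, analogous to Vmax/[alpha1m]
k_app = first_order / (km * 1e-6)        # M^-1 s^-1, analogous to the reported k_app

print(f"Vmax = {vmax:.2f} uM/s, Km = {km:.1f} uM, "
      f"Vmax/[a1m] = {first_order:.3f} s^-1, k_app = {k_app:.2e} M^-1 s^-1")
```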
Reactions with Various α1m Forms and Mutants-The reactions of recombinant α1m were compared with those of α1m purified from human plasma and urine and recombinant α1m from baculovirus-infected insect cells. No significant differences in any of the reaction rates were found between the α1m forms (not shown). The influence of the Cys-34 thiol group was studied using the mutated α1m variant C34S-α1m and alkylated α1m (IAA-α1m) (Fig. 5). A significant, but incomplete, decrease of the reduction rate of ABTS radical was seen between the wild type protein and the two thiol group-modified variants. The formation of the purple ABTS-α1m was decreased to background levels using C34S- and IAA-α1m. This suggests that the Cys-34 thiol group is essential for the binding of ABTS to the protein and is also involved in the reduction of the ABTS radical. Molecular Characterization of ABTS-α1m-The purified and desalted ABTS-α1m was allowed to react with a solution of ABTS radical (Fig. 6A). The ABTS-α1m reduced the ABTS radical at a decreased rate compared with α1m. No more purple color was formed. This suggests that the binding of ABTS on the α1m molecule is saturable (i.e. only a limited number of positions on α1m can be modified) and that the sites for the two reactions are partially linked. SDS-PAGE shows two major molecular forms of ABTS-α1m, 28 and 24 kDa, and the presence of small amounts of a higher molecular weight band (Fig. 6B). A large aggregate was seen in both unreacted wild type α1m and ABTS-α1m. The same pattern was obtained without reducing agents in the gel (not shown). The N-terminal sequence of the 24-kDa band was AGPVPT, corresponding to the native protein without the N-terminal His8 tag and enterokinase cleavage site, but with an extra N-terminal alanine. Alkylation of the Cys-34 residue with radiolabeled iodoacetamide showed incorporation into wild type α1m but not into ABTS-α1m (Fig. 6B). As expected, C34S-α1m was negative. This demonstrates that the thiol group of ABTS-α1m was completely modified. The purple color of ABTS-α1m could be reduced by a large excess of dithiothreitol, and no spectral evidence of any reduced ABTS or ABTS radical was obtained (not shown). This suggests that the purple product is an oxidized form of ABTS and is covalently linked to the protein moiety. Identification of Purple Modifications on ABTS-α1m-Pronase digestion of ABTS-α1m and subsequent purification by HPLC gave two peaks with absorbance at approximately 550 nm. The major peak (Fig. 7, A and B) had a dominant ion of 451 m/z (Fig. 7C). Its MS/MS spectrum (Fig. 7D) had three molecular fragments in common with those of the mass spectrum of the ABTS radical (Fig. 7E). The structure of ABTS and these three fragments are shown in Fig. 9, A and B. The results of this experiment confirm that the purple product contained ABTS or a part of the ABTS molecule. It has been shown that phenols react with ABTS radicals to form purple compounds with broad absorbance around 550 nm (40,44). Therefore, we reacted tyrosine with ABTS radical and analyzed the resulting reaction mixture by LC/MS (Fig. 8).
It contained a species with a mass of 451 m/z that had an absorbance maximum at 555 nm (Fig. 8, A and B). The mass spectrum of this compound contained three molecular fragments (Fig. 8C) in common with the ABTS radical (Fig. 7E) and the purple product obtained from ABTS-α1m (Fig. 7D). These results indicate that ABTS radicals react with tyrosyl residues on α1m to form the purple product that has a mass of 451 m/z. Based on these results and the MS/MS spectra, a structure for the ABTS-Tyr adduct is shown in Fig. 9C. Several tryptic peptides were identified that absorbed at 550 nm. Two of these could be matched to expected tryptic peptides of unmodified α1m. These two peptides were present in the first fraction from the initial HPLC purification, and they eluted at 13.5 and 14.2 min after subsequent separation by LC/MS (Fig. 10, A and B). They had major ions with m/z ratios of 375.2 (Fig. 10C) and 389.1 mass units (Fig. 10D), respectively. The mass spectra also contained ions with m/z ratios of 749.1 (Fig. 10C) and 777.1 (Fig. 10D) mass units, respectively. These respective ions had the correct m/z ratios to indicate that the major ions were a doubly charged species. Thus, the masses of the ABTS-containing peptides were 749.1 and 777.1 mass units. We calculated masses for the unmodified peptides that form ABTS adducts by subtracting 269 mass units from the singly charged peptides. The value of 269 m/z was obtained from the ABTS part of the ABTS-Tyr adduct (271 m/z) minus two mass units that account for the oxidized tyrosine residue. The calculated masses for the unmodified peptide were 480.1 and 508.1 mass units, respectively. These masses correspond to the predicted tryptic peptides IYGK (480.3) and LYGR (508.3), corresponding to tyrosine residues Tyr-22 and Tyr-132, respectively. Confirmatory evidence that supports these assignments was the presence of ions with m/z ratios of 478.1 (Fig. 10E) and 506.3 (Fig. 10F) in the respective MS/MS spectra of the doubly charged species. These fragments would arise from the loss of the ABTS portion of the modified peptides, which would give ions 2 mass units less than the unmodified peptides due to oxidation of the tyrosine residues. The presence of ions with m/z ratios of 244.1 and 259.1 mass units confirms that a portion of ABTS was present in the peptides (cf. Fig. 7). From this finding, it is apparent that ABTS reacts with at least two tyrosine residues in α1m to form covalent adducts. Reaction of α1m with Glycyl-Tyrosyl Radicals-We determined whether α1m can react with other physiologically relevant free radicals. Radicals of the dipeptide Gly-Tyr were generated using lactoperoxidase and hydrogen peroxide (45). Upon adding hydrogen peroxide to the dipeptide and lactoperoxidase, fluorescence associated with dityrosine-like products was produced and was prevented by α1m in a concentration-dependent manner (Fig. 11). Neither human serum albumin nor orosomucoid inhibited the fluorescence. Thus, we conclude that α1m reacts with a transient oxidant formed during oxidation of Gly-Tyr and that this reaction is not a general activity of proteins. DISCUSSION In this investigation, we have demonstrated that the lipocalin α1m rapidly reacts with the ABTS radical by reduction and covalent binding of ABTS derivatives to tyrosine residues in its polypeptide chain. This activity is superstoichiometric, because one molecule of α1m was capable of scavenging 8-9 molecules of the ABTS radical.
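As a compact recap of the mass arithmetic behind the Tyr-22 and Tyr-132 assignments described above (and revisited below), the sketch subtracts the 269-unit ABTS-derived increment (271 for the adduct portion minus 2 for oxidation of the tyrosine) from the singly charged peptide masses and compares the result with the predicted tryptic peptides. The 0.5-unit matching tolerance is an arbitrary illustrative choice.

```python
# Mass bookkeeping for the ABTS-modified tryptic peptides described in the text.
# Observed singly charged masses and predicted peptide masses are quoted from the text;
# the matching tolerance is an assumption made for this illustration.

ABTS_INCREMENT = 271 - 2   # ABTS-derived portion of the Tyr adduct minus 2 units for Tyr oxidation

observed_modified = [749.1, 777.1]                   # singly charged ABTS-peptide masses
predicted_tryptic = {"IYGK": 480.3, "LYGR": 508.3}   # candidate unmodified peptides (Tyr-22, Tyr-132)

for mass in observed_modified:
    unmodified = mass - ABTS_INCREMENT
    matches = [pep for pep, m in predicted_tryptic.items() if abs(m - unmodified) <= 0.5]
    print(f"{mass} -> unmodified {unmodified:.1f} -> {matches or 'no match'}")
```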
Furthermore, the rate of reduction of ABTS radical by α1m displayed saturation kinetics, which indicates that the ABTS radical must bind to α1m before it is reduced. Based on the results of the kinetic experiments, we propose that the following reactions are predominant, where ABTS• represents the ABTS radical, ABTS²⁻ represents reduced ABTS, and ABTS-α1m represents the purple product. The results demonstrated that 8-9 ABTS radicals were consumed per α1m molecule, but only 5-6 reduced ABTS molecules formed during the first phase of the reaction. This suggests that 2-3 molecules of ABTS were bound to each molecule of α1m (i.e. the amount of ABTS radicals not converted to reduced ABTS). The purple ABTS conjugation products were localized to at least two different tyrosine residues, Tyr-22 and Tyr-132, supporting the possibility that several ABTS residues can be covalently linked to the same α1m molecule. It is also possible that ABTS conjugation products may be linked to additional locations besides Tyr-22 and Tyr-132, because not all of the purple Pronase and trypsin ABTS-α1m digestion products could be identified. Based on the products formed in the analogous reactions of ABTS radical with p-hydroxybenzoic acid (41) and the plant flavonoid naringin (46), we propose that a tyrosyl radical on α1m reacts with the ABTS radical via the reaction shown in Fig. 12B. The product formed in this reaction has the required molecular mass of 451 m/z for the product identified when the purple α1m protein was digested with Pronase. We propose a tentative reaction scenario for radical scavenging by α1m (see Fig. 12A) based on the above reactions and the recent finding that α1m has catalytic reductase properties, involving the unpaired thiol group of Cys-34 in the reactive center (8). However, our experiments with the alkylation of the Cys-34 thiol plus its mutation to a serine residue demonstrated that Cys-34 was not solely responsible for reduction of ABTS radicals. This result invokes two possible explanations for the reductant activity of α1m. Either an unidentified residue reduces ABTS radicals and its reductant activity is optimized by Cys-34, or an unidentified residue and Cys-34 both reduce ABTS radicals. Reaction of the cysteine thiol is expected to be favorable, because cysteine reduces the ABTS radical with a second order rate constant of 1.9 × 10⁶ M⁻¹ s⁻¹ (47). The value of kapp for reduction of ABTS radicals by α1m that we obtained was 300-fold less than this rate constant. However, this is expected, because kapp will be a function of the rate constants for the reversible reaction between ABTS radical and the thiol group plus that for binding of ABTS radical to α1m. Once the incipient radicals are formed on the α1m, they must become localized to Cys-34, which in turn oxidizes tyrosine residues to tyrosyl radicals. These tyrosyl radicals would then covalently couple with the ABTS radical to form the purple adduct (Fig. 12). This proposal is supported by our finding that the purple adduct was formed only when the Cys-34 thiol group was present in the protein. (Fig. 12A legend: the pKa of the Cys-34 thiolate is lowered by the proximity of the three positively charged side chains of Lys-92, Lys-118, and Lys-130, located near the Ω-loop. 1, the C34 thiolate group reacts with ABTS radical, and a thiyl radical and reduced ABTS are formed;
2, the thiolate is regenerated by an intramolecular reaction with tyrosine residues, including Tyr-132 and -22, producing a tyrosyl radical; 3, subsequently, the tyrosyl radical reacts with the ABTS radical, forming a stable purple Tyr-ABTS adduct. B, proposed detailed reaction scheme of reaction 3 of A.) Furthermore, cysteine thiyl radicals are capable of one-electron oxidation of tyrosine residues (48). The reaction of the thiyl radical with tyrosine residues need not be direct, because tyrosine residues can be the ultimate sink for oxidizing equivalents in proteins (49), which reflects the thermodynamic pecking order of free radicals (50). Reduction of the thiyl radicals by tyrosine residues is a repair reaction that enables superstoichiometric scavenging of ABTS radicals. Thus, we have compelling evidence that when radicals are formed on α1m, they are transferred through the protein and localized to tyrosyl residues. Analogous radical exchange reactions between tyrosine and cysteine residues account for the catalytic activity of ribonucleotide reductases (51). According to models of the three-dimensional structure of α1m, Cys-34 is located on a large flexible loop (22). This would make it accessible to oxidants that bind to α1m. Furthermore, it is likely that the thiol interacts with adjacent lysyl residues, because it was recently shown that the catalytic reductase properties of α1m are dependent on Cys-34 as well as the lysyl residues Lys-92, Lys-118, and Lys-130 (8). The positively charged lysyl residues may form ionic interactions with the thiolate and consequently lower its pKa. This would facilitate the ability of the Cys-34 thiol to be oxidized and reduce compounds, such as ABTS radicals (Fig. 12). It has been known for more than 30 years that α1m is modified by extremely heterogeneous yellow-brown chromophores. These have been studied extensively, and it was reported previously that the Cys-34 side chain (20) and several lysyl side chains (21) of α1m isolated from urine or amniotic fluid (29) were modified. In the first report, the modifications could not be identified, and in the second report the sizes of some of the modifications were determined to be 112, 206, and 282 mass units. In the third report, they were structurally identified as derived from the tryptophan metabolite kynurenine. Furthermore, α1m has been shown to react with hemoglobin and heme (8,27,52), and it was hypothesized that the chromophores are degradation products of protoporphyrin (27,36). In this paper, we have shown that in vitro reaction of α1m with ABTS radical yields purple modifications on at least two tyrosyl residues, and these could be identified as fragments of ABTS. Thus, a picture emerges of the lipocalin reacting with various organic radicals by reduction and covalent adduction to several of its side chains. A potential physiological function of α1m could be for it to act as a "radical sink" via its radical reductase and scavenging activities. α1m is found in all extracellular fluids in levels similar to the plasma concentration (i.e. around 2 µM) (53), which we have shown to display significant radical reduction and scavenging activity. In support of this proposal, we found that α1m was able to prevent an increase in fluorescence of oxidation products of the dipeptide Gly-Tyr. This and related peptides are oxidized by peroxidases to radical species (45).
Tyrosyl radicals and oxidation products of tyrosine, such as dopa, are known to promote oxidation in biological systems (54,55). The most plausible explanation for the action of α1m is that it reduced either tyrosyl radicals or a related oxidant product when Gly-Tyr was oxidized by lactoperoxidase. We are currently investigating the mechanisms by which α1m prevents the fluorescence changes associated with oxidation of Gly-Tyr, and the objects of future studies should be to identify as many as possible of its targets in normal and pathological conditions and to characterize the reaction mechanisms in detail.
2019-03-21T13:07:19.483Z
2007-10-26T00:00:00.000
{ "year": 2007, "sha1": "d1efadd65f2605c8832d9b1ea95560a8e4317186", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/content/282/43/31493.full.pdf", "oa_status": "HYBRID", "pdf_src": "Highwire", "pdf_hash": "4b2d62596b1b3d75fb2c0a901031c414d1d62297", "s2fieldsofstudy": [ "Chemistry", "Medicine" ], "extfieldsofstudy": [ "Chemistry" ] }
236398162
pes2o/s2orc
v3-fos-license
Gunshot Wounds Causing Distal Arterial Bullet Embolisms We report two cases of small-caliber gunshot wounds to the chest with embolization of the bullet, which completely occluded the arterial circulation of the left lower extremity. A 30-year-old man and a 19-year-old man suffered gunshot wounds to the thorax and abdomen with subsequent arterial embolisms into their left legs. Imaging studies revealed occlusion of the left popliteal and femoral arteries by the missiles. Arteriotomies were successfully performed to retrieve the projectiles, together with Fogarty catheter thrombectomies, with successful outcomes. At 6- and 36-month follow-up, the patients were doing well without any vascular-associated complications. Bullet embolization in the arterial or venous system is a rare complication of penetrating gunshot injuries that poses diagnostic and therapeutic challenges. This complication should be suspected when there is a gunshot injury without an exit wound and with sudden pain or ischemia in an extremity. Individualized treatment should be performed urgently to avoid irreversible damage to the affected area. Introduction Penetrating aortic trauma remains one of the most challenging injuries, with a mortality rate as high as 87.5% for gunshot lesions [1]. Subsequent bullet embolism to a peripheral artery following vascular trauma is remarkably uncommon [2]. Bullets have been reported to migrate within the arterial and venous systems, resulting in serious life-threatening injuries [3]. The former accounts for the majority of these events and is asymptomatic in 80% of cases, whereas the latter occurs less frequently and causes symptoms in only 30% of patients [4]. Arterial bullet embolization can cause misleading symptoms that may delay correct diagnosis and management [5]. Recognizing these events is crucial for vascular surgeons to avoid high rates of morbidity. Few cases of peripheral arterial missile emboli secondary to a firearm injury have been reported in the literature, and the majority involved the lower extremities [3,6]. We report two cases of arterial emboli in the left leg secondary to a migrated bullet completely occluding the popliteal and femoral arteries, leading to acute critical ischemia. A review of the literature is also presented. Case 1 A 30-year-old man with a one-month history of multiple gunshot wounds to the chest and abdomen arrived at the emergency department reporting left leg pain. In that previous episode, he had undergone laparotomy and thoracotomy with primary closure of the small bowel and lung parenchyma. The patient was hemodynamically stable with normal vital signs. On physical examination, the left lower limb was cold with absent infrapatellar pulses, while the femoral pulse was normal. The rest of the examination findings were normal. Laboratory results were within normal parameters. A computed tomography (CT) scan of the thorax and abdomen showed a 9 mm × 9 mm × 6 mm aortic pseudoaneurysm 3 cm above the celiac trunk at the level of the 11th thoracic vertebra, with two metal shards near this site (Figure 1). A Doppler ultrasound of the left leg reported thrombosis of the popliteal artery. The patient was transferred to the operating room, where arteriography was performed through the ipsilateral femoral artery and showed a missile impacted in the popliteal artery (Figure 2). An infrapatellar median incision was made.
Following vascular control of the popliteal artery and the tibioperoneal trunk, a transverse arteriotomy was done just above the latter, where the missile was encountered and removed; in addition, a thrombectomy was performed using a Fogarty catheter (Figure 3). Afterwards, the patient underwent endovascular repair of the aortic pseudoaneurysm, with a 20 mm × 40 mm BeGraft stent (Bentley, Hechingen, Germany) deployed in the aorta and completely covering the pseudoaneurysm (Figure 4). The patient was discharged on postoperative day two without complications. At one-month follow-up, the patient was asymptomatic with normal distal pulses. Case 2 A 19-year-old male patient with no past medical history arrived at the emergency room with two gunshot wounds in the right hemithorax at the 3rd and 7th intercostal space levels and one exit wound in the abdomen. He was hypotensive and tachycardic. He was transferred to the operating room, where a thoracotomy revealed a lung injury, hemothorax, and hemopericardium without cardiac injury; a peritoneal window was created and a chest tube was placed. A laparotomy found two liters of blood secondary to grade II liver and diaphragm injuries, which were resolved by primary repair. He was transferred to the intensive care unit on vasopressors and mechanical ventilation and was then referred to our military hospital for surgical intensive care unit management. Two days later, he presented with acute ischemia in the left lower limb. An abdominal x-ray revealed a projectile in the left inguinal region (Figure 5). A Doppler ultrasound of the affected leg showed a projectile in the left common femoral artery together with a hypoechoic thrombus in the superficial femoral artery, recanalizing in the popliteal artery (Figure 6). Chest and abdominal angio-tomography ruled out an aortic injury. The patient was transferred to the operating room, where an arteriotomy of the left common femoral artery was done through an inguinal approach, the projectile was extracted, and a distal embolectomy was performed with a Fogarty catheter. A common femoral artery arteriorrhaphy was finally done with a saphenous patch (Figure 7). In the postoperative period, he evolved satisfactorily with adequate pulses and recovered from his chest and abdominal injuries. He was discharged on postoperative day 5, walking and with proper oral intake. At 36-month follow-up, he is asymptomatic with normal distal pulses. Discussion Our two patients presented with a missile embolus in the popliteal artery and the common femoral artery one month and two days, respectively, after their gunshot injuries, making these cases extremely uncommon. Most cases reported in the literature found the migrated bullet causing an arterial embolism in the same operative period as the initial injury or a few days afterwards; nevertheless, significantly delayed embolization has been encountered in a few cases, such as one of ours [7]. Penetrating cardiac projectiles are usually fatal; nonetheless, in some cases a bullet can lose its kinetic energy and remain inside a cardiac cavity or the lumen of the aorta, causing a myocardial or aortic disruption and sealing itself with a flap or a localized hematoma [5,7]. Such a low-energy injury may have enough energy to penetrate, but not to transfix, the vessel, so the bullet travels with the blood flow until it occludes a peripheral artery at a site distant from the initial perforation [6].
Therefore, the diameter of the bullet must be smaller than the width of the blood vessel it penetrates. This explains why bullet fragments or pellets are more prone to embolize [8]. In our patients, the pellets were found in the arterial circulation of the left leg. We believe that the pellets entered through the thoracic aorta and traveled down into the left lower limb because of their small size. The mortality rate of peripheral arterial embolisms may vary depending on the site at which the bullet enters the systemic circulation, being 21% when the projectile enters through a cardiac cavity, 47% when it enters the thoracic aorta, and increasing to 70% when it penetrates the abdominal aorta [4]. Bullets can cause an embolism in the arteries of the lower limbs, especially when the projectile enters through the descending or abdominal aorta, owing to the anatomy and the position of the patient at the time of the trauma, as well as the patient's respiratory and muscular movements [8]. Shannon et al. [9] reviewed 30 patients with peripheral arterial missile embolization; the lower extremities were involved in 23 cases (76.7%), with the left leg accounting for 61% and the right leg for 39% of the cases. In most reported cases of bullet emboli to the lower extremities, the embolism site is the popliteal artery in 50% of the cases [2,9]. Embolization to the left lower extremity is more common than to the right because of the more acute angle that the right common iliac artery makes with the aortic bifurcation [10]. The symptoms that the embolus might produce depend on the artery involved, the percentage of occlusion, and the amount of collateral circulation. Signs and symptoms of emboli in the lower extremities include ischemia, limb weakness, decreased sensation, paresthesia, and diminished peripheral pulses [2]. Our patients developed acute ischemic symptoms when the projectile occluded 100% of the arteries, consequently blocking the complete blood flow to their lower extremities. A confirmatory imaging study, such as computed tomography angiography, radiography, or Doppler ultrasound, is necessary for asymptomatic patients when there is clinical suspicion [11]. There is no consensus on the ideal treatment for bullet emboli removal. An endovascular approach can be considered for mobile projectiles, allowing percutaneous removal using basket- or snare-type catheters [3]. On the contrary, in cases where impaction of the bullet or the associated thrombus leads to an urgent scenario with a high risk of amputation, open surgery must be done to retrieve the bullet, clear the thrombus, and perform a transverse arteriotomy [2,11]. The latter was done in our patients because they showed signs of acute ischemia; therefore, the decision was made to explore the left popliteal fossa and the left thigh. Usually, balloon embolectomy with catheter extraction is contraindicated in these cases because of possible injury to the intimal lining as the foreign bodies are removed retrograde through the arterial lumen [9]. Conclusion Two cases have been described in which projectile arterial emboli were encountered in the left lower limb after thoracic gunshot wounds. If the energy of a gunshot injury to the chest or abdomen diminishes, the surrounding muscles will prevent exsanguination. Consequently, the projectile itself may act as an embolus and travel through the body's vessels, predominantly to the lower extremities.
This complication should be suspected in all cases in which there is a gunshot injury to the chest or abdomen without an exit wound and with sudden pain or ischemia in an extremity. Main Novel Aspects • Arterial bullet embolization can cause misleading symptoms that may delay correct diagnosis and management. Recognizing these events is crucial to avoid high rates of morbidity. • Bullets can cause an embolism in the arteries of the lower limbs, especially when the projectile enters through the descending or abdominal aorta, owing to the anatomy and the position of the patient at the time of the trauma. • Arterial bullet embolization should be suspected when there is a gunshot injury without an exit wound, with sudden pain or ischemia in an extremity. Individualized treatment should be performed urgently to avoid irreversible damage to the affected area. Declaration Ethics approval and consent to participate: All procedures performed in studies involving human participants were in accordance with the ethical standards of the Tecnológico de Monterrey ethics committee and institutional review board and have therefore been performed in accordance with the ethical standards laid down in the 1964 Declaration of Helsinki and its later amendments. Consent for publication: Written informed consent was obtained from the patients for publication of these case reports and accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal on request.
2021-07-27T00:04:38.496Z
2021-06-01T00:00:00.000
{ "year": 2022, "sha1": "d5c4250fc0a1caed045b8bfc9cf945a37f440e4b", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "aa6a7a852f7cda549a7519436b769f70defcaa0a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
248836383
pes2o/s2orc
v3-fos-license
Model-Based Computational Analysis on the Effectiveness of Enhanced Recovery after Surgery in the Operating Room with Nursing Objective In order to better understand the surgical care process, this work used a model-based computational analysis of the effectiveness of enhanced recovery after surgery (ERAS) with nursing in the operating room. Methods A total of 360 surgical patients treated at the First Affiliated Hospital, Sun Yat-sen University, between June 2020 and March 2021 were randomly divided into two groups, an observation group and a control group, with 180 cases in each group. Routine nursing was used in the control group, while ERAS was implemented in the observation group through four components, namely, preoperative visit, intraoperative cooperation, postoperative return visit, and psychological intervention. Results Postoperative complications, average hospital stay, nursing satisfaction, and postoperative quality of life in the observation group were significantly better than those in the control group (all p < 0.05). Conclusion The application of ERAS for surgical patients can enhance team awareness, optimize the process of cooperation, reduce surgical complications, and improve nursing quality and prognosis, and it is worth popularizing in the operating room. INTRODUCTION Surgery refers to treatment in which doctors use knives, scissors, needles, and other medical instruments to cut and suture parts of the human body in order to maintain or even save the patient's health; this surgical treatment is commonly known as an "operation". Its purpose is to treat or diagnose diseases and to improve the body's function and shape, for example by removing diseased tissues (1,2), repairing injuries (3,4), and transplanting organs (5,6). Early surgery was limited to cutting and suturing on the body surface by simple manual methods such as abscess drainage, tumor resection, and trauma suturing. With the development of surgery, the field has been expanding, and today operations can be performed on any part of the human body (7-10). In addition, it has been reported that surgery has greater efficacy than non-surgical treatments in curing some human diseases (11,12). However, various intraoperative and postoperative complications may occur due to injury, bleeding, or infection caused by surgical treatment (13-15). Moreover, when patients undergo surgery, they experience the stimulation of anesthesia and surgical trauma; their body is placed in a state of stress, which leads to both a psychological and a physiological burden (16). Therefore, good and effective perioperative nursing is required to provide patients with holistic physical and mental care so that they can pass through the perioperative period in the best frame of mind (Figure 1). Such nursing also plays an extremely important role in preventing or reducing postoperative complications (17). The theory of Enhanced Recovery after Surgery (ERAS) was first systematically proposed by the Danish surgeon Professor Kehlet (18) in 1997; it refers to the adoption of a series of perioperative optimization measures supported by evidence-based medicine to block or reduce the body's stress response. It can promote accelerated recovery after surgery and shorten the patient's hospitalization time, thereby reducing postoperative complications as well as the risk of readmission and death (19).
It has been verified that ERAS has very positive applications (20,21). The purpose of this study is to analyze the effect of ERAS on perioperative nursing and to provide a reference for further study. General Description A total of 360 surgical patients (223 males and 137 females) treated at the First Affiliated Hospital, Sun Yat-sen University, between June 2020 and March 2021 were selected as the study subjects. All the selected patients underwent elective surgery and were able to cooperate actively with perioperative nursing guidance. The whole study was carried out with the informed consent of these patients and was approved by the hospital ethics committee. All patients were randomly divided into two groups, with 180 in each group. Of these, 118 males and 62 females aged 61 to 78 years, with an average of (62.50 ± 15.60) years, were in the observation group, in which ERAS was implemented in the form of preoperative visit, intraoperative cooperation, postoperative return visit, and psychological intervention. A total of 105 males and 75 females aged 51 to 81 years, with an average of (62.70 ± 14.60) years, were in the control group, in which routine nursing was implemented. There was no significant difference between the two groups in general data such as gender, age, and gastrointestinal diseases (all p > 0.05), indicating that the groups were comparable in this study. Routine Nursing The control group was given routine nursing care. Preoperative nursing was carried out for the purpose of education. Patients were required to fast for 8-12 h and abstain from drinking for 4 h (7). After entering the operating room, the patients were checked, and venous access was established. After general anesthesia, the patients were placed in the operating position. They could eat after anal exhaust, and complications were observed and recorded. ERAS Pathway The observation group received routine nursing plus the corresponding nursing interventions combined with ERAS, including preoperative nursing, operating room nursing, and postoperative nursing, which are described in the following paragraphs. Preoperative Nursing In the ERAS pathway, good preoperative preparation and psychological nursing play a key role in the smooth conduct of the operation. Nurses should visit patients 1 day before the operation and give them appropriate diet and psychological nursing. Psychological Nursing. Surgery is an invasive procedure, which places a serious psychological burden on patients. Anxiety is a common psychological condition in patients before surgery. Psychological counseling should be carried out before surgery to enhance the patients' confidence during surgery. Self-Care Ability. The self-care ability of the patients was evaluated according to the inputs provided by the patients in the self-care ability evaluation form. Self-care ability was divided into four levels, namely, no dependence, mild dependence, moderate dependence, and severe dependence. These levels were evaluated as no care needed, a little care needed, most care needed, and full care needed, respectively. Dynamic evaluation was made according to changes in the patients' condition and nursing levels, and corresponding nursing measures such as secondary care, primary care, and special care were implemented. Diet Nursing. The nutritional status of the patients was evaluated.
Patients without gastrointestinal motility disorders were required to fast from solid food for 6 h and from liquid food for 2 h before the operation. They were required to take two bottles (approximately 800 ml) of "Suqian beverage" (a maltose-fructose drink made in China) orally at 22:00 the night before and one bottle (approximately 400 ml) 2 h before the operation. Reducing patients' hunger, thirst, and anxiety can lower the incidence of postoperative nausea and vomiting and thereby accelerate recovery. Operating Room Nursing The patient's bladder should be confirmed to be empty when the nurse brings the patient into the operating room. A balanced solution was given at approximately 30 drops/min after confirming the standby state of the indwelling needle, dripping slowly for maintenance (22,23). The roving nurse and the ward staff jointly verified the general information of the patients and handed over their intraoperative medication, imaging data, special supplies, and medical records. After the printed operation handover form was signed, the patients were sent to the operating room. The patients were under anesthesia during the operation. Excessive blood loss and fluid loss may be caused by a long operation time and trauma. Therefore, it is highly important to implement operating room nursing interventions in the ERAS pathway. The infusion channel should be planned reasonably, and an appropriate venous catheter should be selected. In case of significant blood and fluid loss during the operation, a large-diameter venous channel and an anti-infection central venous catheter should be selected, and the three-way stopcock should be managed well. It has been reported that the contamination rate of the three-way stopcock during the operation can reach 23%. An integrated board was used to prevent infection. In addition, body position management should be standardized. The exposed field should be convenient for the operator to conduct the operation. The body should be positioned gently, and the functional position should be maintained after positioning. A personalized body position should be adopted to avoid skin and nerve damage. Physical preventive measures such as elastic stockings and intermittent compression devices can be used to avoid low blood volume. A specialist group should be set up, with a specialist nurse as the team leader. Daily staffing should be arranged by the specialist group. Operative materials should be prepared according to the doctor's instructions, and the staff should actively cooperate with the surgeons to shorten the operation time. Postoperative Nursing The patients went back to the ward after anesthesia. Evaluation and handover were made according to the observation record sheet of the post-anesthesia care unit (PACU). The handover contents mainly included the following: identity confirmation, vital signs, consciousness, respiration, circulation, oxygen saturation, the patient's limb mobility, oral and lip color, infusion, urinary catheter, medication, drainage and wound dressing, and skin. Observation Indicators The incidence of postoperative complications, treatment effect, nursing satisfaction, and quality of life were compared between the two groups (22-24). According to the operating room patient satisfaction questionnaire developed by our hospital, the patients scored their nursing satisfaction on the spot during the postoperative return visit. Satisfaction rate = (very satisfied + satisfied) / total number of patients × 100%.
Statistical Method SPSS 26.0 statistical software was used to analyze the data. Measurement data were expressed as mean ± standard deviation (x̄ ± s) and compared with the t-test. Count data were expressed as percentages (%) and compared with the χ² test. Differences were considered statistically significant at p < 0.05. RESULTS AND DISCUSSIONS As shown in Table 1, complications such as skin injury, shivering, and incision infection occurred in both groups, with 13 cases in the observation group (7.22%) and 35 cases in the control group (19.44%). The number of patients with complications in the observation group was significantly lower than in the control group, indicating a better nursing effect with ERAS (p < 0.05). One of the concepts of ERAS is to reduce the incidence of postoperative complications and promote the recovery of patients' physical and psychological health (25), which is consistent with the results in Table 1. Nursing staff made a comprehensive evaluation during the preoperative visits of the patients in the observation group. The infusion pipeline was well managed during the operation, and the operating position was correctly arranged to prevent hypothermia. In addition, a series of nursing interventions to prevent deep vein thrombosis and control incision infection were adopted, which significantly reduced the complication rate. Generally, surgical patients experience moderate to severe pain. Good postoperative analgesia can relieve their tension and anxiety. In the ERAS pathway, a return visit was made to correctly evaluate the patients' pain after the operation. Analgesia given in a preventive, timely, and multimodal manner is beneficial for wound healing and speeds up recovery (26). ERAS has been shown to allow patients to move out of bed sooner (27,28) and to reduce the length of hospital stay (29,30). From Table 2, it can be seen that the patients in the observation group were significantly better than those in the control group in terms of exhaust time, time to free movement out of bed, and average length of hospital stay (p < 0.05), which is consistent with previous reports. As shown in Table 3, patients' satisfaction with nursing in the observation group (98.30%) was significantly higher than that in the control group (85.00%), and the difference was statistically significant (p < 0.05). Compared with patients who underwent routine nursing, those who received ERAS had a shortened fasting time for food and drink, and the hunger, panic, and fear caused by long-term fasting were avoided. Effective communication with the patients was carried out before the operation, so that the patients could understand the purpose and duration of fasting more clearly and cooperate more actively during the perioperative period. Therefore, nursing satisfaction was improved (31). Quality of life was positively correlated with the score: the higher the total score, the higher the quality of life. As shown in Table 4, the quality-of-life scores after nursing in the observation group were significantly higher than those in the control group (p < 0.05) after psychological intervention. It has been reported that ERAS can significantly improve patients' mental and physical health, which is basically consistent with the conclusion drawn from Table 4 of this study (32). Moreover, psychological intervention can improve patient compliance with ERAS after the operation.
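For the group comparisons reported above, the same kinds of tests described under Statistical Method can be reproduced outside SPSS. The sketch below runs a two-sample t-test on a continuous outcome and a chi-square test on the complication counts; the complication table uses the 13/180 versus 35/180 figures quoted in the text, while the hospital-stay values are invented placeholders, and the 0.05 threshold follows the text.

```python
# Illustrative two-group comparison mirroring the described analysis:
# t-test for measurement data (mean +/- SD) and chi-square test for count data.
import numpy as np
from scipy import stats

# Hypothetical hospital-stay values (days) for two groups -- placeholders, not study data.
observation_stay = np.array([6, 7, 5, 6, 8, 7, 6, 5], dtype=float)
control_stay = np.array([9, 8, 10, 9, 11, 8, 9, 10], dtype=float)
t_stat, p_t = stats.ttest_ind(observation_stay, control_stay)

# Complication counts from the text: 13/180 (observation) vs. 35/180 (control).
contingency = np.array([[13, 180 - 13],
                        [35, 180 - 35]])
chi2, p_chi2, dof, _ = stats.chi2_contingency(contingency)

alpha = 0.05  # significance threshold used in the study
print(f"t = {t_stat:.2f} (p = {p_t:.4f}); chi2 = {chi2:.2f} (p = {p_chi2:.4f}); alpha = {alpha}")
```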
CONCLUSIONS
In this study, the effects of routine nursing and the ERAS pathway on perioperative nursing were compared. The results indicated that the ERAS pathway can not only reduce postoperative complications and shorten the length of hospital stay but also improve patients' quality of life. For patients, applying the ERAS concept during the perioperative period can shorten the operation time and reduce postoperative complications, thereby improving prognosis and enhancing overall satisfaction with the quality of care. For surgeons, ERAS can raise the awareness of the surgical team and optimize cooperation in the operative workflow, which makes it worth promoting. With the development of medical technology, minimally invasive surgery and precision medication have reduced the contraindications to surgical treatment. Surgery, as the main form of invasive treatment, nevertheless has a great impact on patients' psychological and physiological status. To alleviate patients' preoperative anxiety and fear, improve nursing quality, and reduce postoperative complications, operating room nursing staff are required to keep pace with the times and adopt new ideas in serving patients. However, because ERAS involves a wide range of departments, multi-team and multidisciplinary cooperation is required. In our study, ERAS proved to be an effective way to help patients recover quickly and comprehensively, providing a useful reference and theoretical basis for studying ERAS and for updating traditional nursing concepts to devise more effective nursing measures.

DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author/s.

AUTHOR CONTRIBUTIONS
WL and SH conceptualized and designed the study and wrote the first draft of the manuscript. YX, GC, and JY were involved in data collection and analysis. YY contributed to manuscript revision, reading, and project management. All authors contributed to the article and approved the submitted version.

ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the Ethical Review Committee of The First Affiliated Hospital, Sun Yat-sen University. The patients/participants provided their written informed consent to participate in this study.
2022-05-18T13:25:09.672Z
2022-05-18T00:00:00.000
{ "year": 2022, "sha1": "d9173b0e1475164bc2ad48c6f8508a9e059d09a9", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "d9173b0e1475164bc2ad48c6f8508a9e059d09a9", "s2fieldsofstudy": [ "Medicine", "Computer Science" ], "extfieldsofstudy": [ "Medicine" ] }
2374010
pes2o/s2orc
v3-fos-license
Low temperature limits for root growth in alpine species are set by cell differentiation

This paper explores the causes of plant growth cessation at critically low temperatures in arctic-alpine environments. We grew four alpine plant species in thermostated soil cylinders in the field in the Swiss Alps, monitored root growth and studied root tip anatomy. Roots stopped growing at temperatures between 0.8 and 1.4 °C. Microscopic examinations of root tips revealed that rates of cell elongation and differentiation control length growth. Xylem lignification appears to be a co-limiting factor at growth-limiting low temperatures.

Introduction
In arctic-alpine environments, low temperatures constrain the growing season and thus biomass production of plants (Bliss 1956; Pollock and Eagles 1987; Körner 2003). Similar growth limitations by low temperatures were found for winter crops and plant species in grasslands (Gallagher 1979; Wingler and Hennessy 2016); therefore, it has been hypothesized that all cold-adapted plants are subject to common growth constraints when temperature arrives at a critical threshold (Körner 2008). Above-zero soil temperatures need to occur over at least 6 weeks for angiosperm survival (Körner 2011). Results of earlier works have indicated that tissue formation, irrespective of whether above- or below-ground, becomes very slow at or below 5 °C (Alvarez-Uria and Körner 2007; Körner 2008; Nagelmüller et al. 2016a) and was never observed at or below 0 °C, a temperature that still permits CO2 uptake at ca. 30 % of photosynthetic capacity. Hence, at such low temperatures plant growth is not carbon limited (Pollock et al. 1988; Xiong et al. 1999; Körner 2003, 2015). Similar low temperature thresholds were reported for leaf expansion as well as for root length increment (Körner and Woodward 1987; Schenker et al. 2014; Nagelmüller et al. 2016a), and radial growth of xylem (Rossi et al. 2007), suggesting that apical and lateral meristems exhibit similar temperature responses and face the same low temperature limitations at tissue and cell level. Leaves of cold-adapted Poaceae start expanding very slowly at close to 0 °C (Peacock 1975; Körner and Woodward 1987; Porter and Gawith 1999; Nagelmüller et al. 2016b). Although absolute minimum temperature thresholds for growth do not explain the overall plant performance in cold climates, the analysis of tissue processes at such extreme thermal constraints provides insights into the underlying physiological and anatomical mechanisms that control life at the cold edge. The production of new plant tissue includes cell division, cell enlargement and cell differentiation into various operational cell types (in that sequence). From what is known to date, cell division in cold-adapted plants is not interrupted at close to 0 °C (Francis and Barlow 1988; Körner and Pelaez Menendez-Riedl 1989). Cell enlargement depends on balanced rates of turgor-driven cell wall expansion and secondary cell wall synthesis. In graminoids, water flux into the vacuole (a major driver of cell expansion) does not appear to be affected over a temperature range from 2 to 20 °C (Thomas et al. 1989; Pollock et al. 1990). Even in a chilling-sensitive cucumber, vacuoles exposed to 8 °C had no problem absorbing water (Lee et al. 2005), and Spinacia plants rapidly adjusted root hydraulic pressure after root temperature was reduced from 20 to 5 °C (Fennell and Markhart 1998). Hence, the critical processes are most likely associated with the growing cell wall.
In the expansion zone of shoots and roots, cells undergo a severalfold size enlargement, which cannot be achieved with the initial primary wall. Secondary wall formation must go hand in hand with size increment, so cell enlargement cannot be separated from differentiation, the most resource-demanding process (Pollock and Eagles 1987). As part of that differentiation, xylem and phloem become established. Xylogenesis notably contributes to the final biomass because of the thick xylem cell walls and their lignification. A low temperature-driven slowing of cell differentiation must feed back on cell division in order to retain mechanical robustness of the resulting tissue (Körner 2003). In conifers near the treeline, xylogenesis was found to cease at temperatures below 4-5 °C (Rossi et al. 2007, 2008). A lower temperature threshold for xylogenesis (2.0 ± 0.6 °C) was recently reported in the alpine shrub Rhododendron aganniphum (Li et al. 2016). However, we are dealing with an asymptotic decline, causing the absolute limit to become a matter of precision and definition. We suspect that cell differentiation (including lignification) is the most likely cause of root growth cessation at very low temperatures which otherwise still enable photosynthesis and cell division. To explore these processes at tissue and cell level, we decided to use roots and root tips because roots grow in a thermally buffered environment, permitting us to explore the effect of even minute temperature differences on meristematic activity at critically low (still positive) temperatures. Roots expanding from ambient soil surface temperatures towards critically cold conditions deeper in the soil allow threshold temperatures to be identified and root tips developed under such cold conditions to be sampled (Alvarez-Uria and Körner 2007; Schenker et al. 2014). We exposed four alpine plant species, Ranunculus glacialis, Rumex alpinus, Tussilago farfara and Poa alpina, to such conditions in the field. From prior research, employing cold glacier water runoff as a cooling medium, we delineated that the zero point for root growth is below 5 °C; however, a precise minimum temperature threshold could not be defined, nor could the tissue level responses be assessed for the thermal limit of growth (Nagelmüller et al. 2016a). In the present study, we quantified anatomical/histological changes of cell expansion/differentiation in roots and root tips (root kinematics, Silk and Erickson 1979; Sharp et al. 2004) grown at precisely controlled temperatures below 3 °C in order to identify the absolute minimum temperature threshold for root growth and cell elongation and differentiation in alpine plants. We expected continued cell division but a delay in the rate of cell enlargement and cell differentiation, causing this zone of the root tip to lengthen relative to controls at 10 °C. We also anticipated weaker lignification, hence a longer stretch of poorly lignified tissue behind the root tip as it reaches its low temperature growth limit.

Experimental setup
The experiment was conducted at the ALPFOR research station, close to the Furka Pass, at 2440 m a.s.l. in the Swiss central Alps. Individuals of four alpine plant species, R. glacialis (Ranunculaceae), R. alpinus (Polygonaceae), T. farfara (Asteraceae) and the grass species P. alpina ssp. vivipara (Poaceae), were collected at a very early seasonal developmental stage. We selected plantlets with newly emerging root tips of <2 mm length on the day of sampling.
Roots from the previous growing season were cut to 3 cm length for later distinction from newly developed roots. For each species, we planted 42 individuals in cylindrical containers so that the apical meristem was positioned at −1 cm soil depth, which also correspond to the position of the youngest newly emerging root tips. Half of the plants were planted in Plexiglas® cylinders (200 × 50 mm, 1 mm wall thickness, Evonik Industries, Essen, Germany), which allowed imagebased root elongation measurements. The other half was planted in correspondingly sized polypropylene tubes with 0.8 mm wall thickness ('p-Safe PP', 5-P KG, Sulz, Germany), appropriate for measuring final rooting depth and the final harvest of root tips as well as the total root biomass. Both types of cylinders had a watertight seal at the bottom. The lowest 2 cm of each cylinder were filled with quartz sand (grain size: 2 mm) for drainage water, separated from the growth substrate by a fibre mat (Fig. 1). The upper 18 cm of the tube length was filled with a substrate mixture of 80 % fine sandy glacier silt and 20 % potting compost (Capito Universalerde, Fenaco, Bern, Switzerland). To exclude any growth-limiting factor except temperature in such an artificial growth substrate, fertilizer was provided weekly (in total three times over the 29-day experimental period) by adding 10 mL full strength Hoagland's solution (1.6 g L −1 of Hoagland salts; Sigma-Aldrich, Munich, Germany). Plants were watered with 20 mL water every second day in case there was no precipitation. Excess water (also from rain) drained to the bottom, was removed with a hand pump, using a 3 mm tube that reached the cylinder bottom. Plants were exposed for 29 days during the main part of the growing season to have low and high substrate temperatures by immersing the cylinders into four double-walled 96 L stainless steel, thermostated water tanks (interior dimensions: 80 × 60 × 20 cm). Three of these water baths were set to 1 °C and one to 10 °C as a control, resulting in temperatures of ca. 1.5 and 10.3 °C of the circulating cooling water (see Results section). Thirty plant replicates per species were placed in each of the low temperature baths, and 12 plant replicates per species were placed in the 'warm' control bath. Cylinders were randomly arranged in the water baths. Each water bath was equipped with a thermostat system (CBN 28-30 and HTM 200, Heto-Holten, Allerød, Denmark) and a water-circulating system to ensure uniform temperature distribution in the water baths. To minimize vertical heat flow, water baths were covered by a 2 mm aluminium plate and with a 2 cm Styrodur™ isolation layer on top. These covers had 50 mm diameter holes into which 40 cylinders per bath were inserted and fastened by a rubber ring (Fig. 1). The rings also prevented light from leaking into the 'below-ground' compartment. The water bath systems were placed on the terrace of the ALPFOR station to expose above-ground plant organs to typical alpine climate conditions ( Fig. 2; see Supporting Information- Fig. S2). The soil temperature gradient was measured in two cylinders per water bath (named T-cylinders) each equipped with seven small temperature sensors (NTCresistors, 2 mm in diameter, 5 kΩ at 25 °C, ±0.2 K, Epcos, Munich, Germany) at depths from −10 to −140 mm in the cylinders (Fig. 1) and with 1 cm distance to the cylinder wall. Plants were absent in these T-cylinders. 
Water temperatures in each water bath were measured by another set of NTC resistors close to the water-circulating system. Temperatures in T-cylinders were recorded every 10 min with a data logger (CR1000, Campbell Scientific, Logan, UT, USA) and two AM16/32B multiplexers (Campbell Scientific). Hourly temperature means were used for all further calculations. Due to short periods of equipment failure early in the experiment, which had no noticeable impact on the root elongation rate (RER) or the harvest data, the water in cold treatment bath 3 heated up to 19 °C for 14 h on treatment-day-3 and up to 9 °C for 7 h on treatment-day-9 [see Supporting Information-Fig. S1]. In the warm control treatment, water heated up to 21 °C for 12 h on treatment-day-1. These brief temperature deviations emerged before RER measurements started and, except for the 7 h on Day 9, occurred immediately after planting, when plants were still affected by the transplantation.

Growth measurements
RERs were obtained over a period of 9 days, starting on treatment-day-14 when sufficient roots had arrived at the transparent cylinder walls of the cold treatment. Digital images (25 pixels mm−1) of the Plexiglas® cylinders were taken at 12-h intervals (0700 in the morning and 1900 CET in the evening) using a photo box (100 × 60 × 60 cm) equipped with a digital camera (Nikon D7000, Nikon, Tokyo, Japan) with a 35 mm lens (Nikon) and a flash (64AF1, Metz, Zirndorf, Germany) for illumination. To ensure consistent image frames, cylinders were placed at exactly the same distance and orientation using positioning guides. Collecting and repositioning cylinders and taking photos took <90 s. The position of root tips was tracked across the sequence of images using the software 'ImageJ' (version 1.47v, Rasband 1997-2015) and the plug-in 'SmartRoot' (Lobet et al. 2011). We measured root length increment to calculate RERs (mm per 12 h) and the root tip position in the cylinders (soil depth). The root tip positions were related to the temperature profiles along the cylinders [see Supporting Information-Fig. S3]. All plant individuals were harvested after 29 treatment days, which represents the main part of the alpine growing season. Both types of cylinders were opened at the bottom. The substrate was carefully removed until the tip of the single, deepest root became visible (without stretching the root), and maximum rooting depth was measured with a ruler. These depth values of roots reaching the coldest soil layers (averaged from the single deepest roots of several cylinders) were used to calculate the mean minimum temperature threshold for root growth for each species. Thereafter, plants were washed, photographed and separated into roots, leaves and stems including flowers. Leaves and roots were scanned with a transmitting light scanner (Epson Expression 1680, Epson, Meerbusch, Germany). Total root length and the numbers of primary and lateral roots were calculated from scans using the WinRHIZO software (Regent Instruments Inc., Quebec, Canada). The dry weight of roots, leaves and stems was determined after drying at 80 °C for at least 48 h. Specific root length (SRL) and specific leaf area (SLA) were calculated by dividing the total root length (m) and leaf area (cm2) by the corresponding dry weights (g). For biomass allocation (functional growth analysis), mass fractions (leaf, stem and root mass fractions) were calculated by dividing the dry weights of the fractions by the total plant weight.
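The growth metrics described above reduce to simple ratios once root tip positions and dry weights are recorded. The following minimal sketch illustrates the arithmetic; all variable names and numbers are hypothetical and are not values from this study.

```python
# Hedged sketch of the derived growth traits described above; all input values are
# hypothetical and serve only to illustrate the arithmetic.

# Root elongation rate (RER) per 12-h interval from tracked tip positions (mm).
tip_positions_mm = [42.0, 42.6, 43.1, 43.9]          # successive 12-h images
rer_mm_per_12h = [b - a for a, b in zip(tip_positions_mm, tip_positions_mm[1:])]

# Specific root length (m g^-1) and specific leaf area (cm^2 g^-1).
total_root_length_m = 1.8
root_dry_weight_g = 0.012
srl = total_root_length_m / root_dry_weight_g

leaf_area_cm2 = 25.0
leaf_dry_weight_g = 0.08
sla = leaf_area_cm2 / leaf_dry_weight_g

# Biomass fractions: component dry weight divided by total plant dry weight.
root_g, leaf_g, stem_g = 0.012, 0.080, 0.030
total_g = root_g + leaf_g + stem_g
rmf, lmf, smf = root_g / total_g, leaf_g / total_g, stem_g / total_g

print(rer_mm_per_12h, round(srl, 1), round(sla, 1),
      round(rmf, 2), round(lmf, 2), round(smf, 2))
```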
Root anatomy Longitudinal thin sections (tangential) with 80 µm thickness of the 2 cm terminal part of the root including the tip were made through the central cylinder in order to assess (i) cell density per unit root area, (ii) cell length in the elongation zone and (iii) cell differentiation (lignification) as described below. We sampled root tips from several single deepest roots of different cylinder at the day of harvest and stored them in 75 % (v/v) ethanol. Root tips were cut in pieces of 5 mm length, embedded in 3 % (w/v) agarose gel and cut by a vibratome (VT1200, Leica Biosystems, Nussloch, Germany). To visualize lignification in the xylem, thin sections of 80 µm were stained following Brundrett et al. (1988) starting with 1 h in 0.1 % (w/v) berberine hemisulphate followed by 30 min in 0.5 % (w/v) aniline blue at room temperature. Although berberine is not considered a lignin-specific dye (Brundrett et al. 1988), it intensifies the fluorescent signal of lignified xylem cell walls in contrast to nonlignified walls and allowed us to quantify the lignification optically. In addition, counter-staining with aniline blue inhibited any other fluorescence signals in the root tissues. Stained sections were mounted on microscope slides in 50 % (v/v) glycerine with 0.1 % (w/v) of FeCl 3 as a preservative. Sections were viewed with a fluorescence microscope (Leica DM 2500, Leica Microsystems, Wetzlar, Germany) equipped with an UV-filter set (excitation filter BP 320-280 nm, chromatic beam splitter FT 400 (400 nm), emission filter LP 425 nm). For image analysis, series of overlapping images were taken along the root by a digital microscope camera (Leica DFC 300 FX, 3.2 pixels mm −1 ) with constant 100 ms exposure time. Single images were merged by eye with the program 'Illustrator CS5' (Adobe Systems Incorporated, San Jose, CA, USA) to display a longitudinal section of the entire 20 mm root tip. We (i) counted the number of cells in a 0.2 × 0.2 mm square which was positioned as close as possible to the root meristem's initials (root apex, beneath the root cap), determined (ii) the mean final cell length in roots, measured (iii) the distance from the root apex to the position at which final cell length was reached to define the length of the cell elongation zone and (iv) assessed the degree of lignification of the xylem along the root. For (iv), we selected four image snippets (0.6 mm in diameter) of the central cylinder every 4 mm along the root starting at the root apex. Pixels of image sections were examined for lignification over a defined strip of 10 pixel width (ca. 0.2 mm) over the length of the four snippets using MATLAB 8.2 (The Mathworks, Natick, MA, USA). Then, we extracted the L-channel values of the HSL colour space (H = hue, S = saturation and L = index of lightness, with values between 0 and 1, RGB red-green-blue colour model) of the selected pixels and averaged these values by the number of pixels. These light intensity indices were used as a quantitative proxy for lignification. In addition, we measured (v) the distance from the root apex to the first lignified (fluorescent) xylem element. Cell size and distance measurements were done using the program 'ImageJ' (see above). 
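A minimal sketch of how the lightness-based lignification index described above could be computed is shown below; it is not the authors' MATLAB routine. The image file name, strip position and strip width are hypothetical, and the HSL lightness is obtained directly from its definition, (max(R,G,B) + min(R,G,B)) / 2, rather than through a full colour-space conversion.

```python
# Hedged sketch of the lignification proxy: mean HSL lightness (L-channel) over a
# ~10-pixel-wide strip of a fluorescence image snippet. File name and coordinates
# are hypothetical placeholders.
import numpy as np
from PIL import Image

def strip_lightness(image_path, col_start, strip_width=10):
    """Mean HSL lightness (0-1) of a vertical strip of an RGB image."""
    rgb = np.asarray(Image.open(image_path).convert("RGB"), dtype=float) / 255.0
    strip = rgb[:, col_start:col_start + strip_width, :]
    # HSL lightness per pixel: (max(R, G, B) + min(R, G, B)) / 2
    lightness = (strip.max(axis=2) + strip.min(axis=2)) / 2.0
    return float(lightness.mean())

# Example call for a hypothetical snippet taken 4 mm from the root apex:
# print(strip_lightness("root_snippet_4mm.png", col_start=120))
```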
Data analysis and statistics
We calculated the correlations between soil temperature and soil depth for each water bath, both for the measured RERs during 12 h (mm per 12 h) and for the root tip positions at harvest, using third-order polynomial regressions, as the temperature decrease between −20 and −80 mm soil depth was non-linear. Root tip temperatures were then estimated using the polynomial functions, based on the root tip positions at the end of each 12-h interval. Hourly temperatures within each 12-h RER interval were used for the polynomial fit, and hourly minimum and maximum temperatures during the 12-h interval were also considered [see Supporting Information-Fig. S3]. Since root tips occurred over a narrow range of the profile only (with curve fitting outside that range not relevant for root tips), we also estimated root tip temperatures derived from two combined linear regressions between the sensor depths −20 to −40 mm and −40 to −60 mm, where most cold-treated roots grew. These temperatures differed from the temperatures from the polynomial regressions on average by ±0.017 K (SD), with a maximum deviation of 0.02 K. Since this deviation is below the T-sensor accuracy, we are confident that the polynomial regressions reflect the root tip temperature in the root observation window with the needed precision. To analyse the relationship between RERs (mm per 12 h) and root tip temperatures, linear models were applied. The minimum temperature for root growth was derived from the single deepest root within a cylinder reached at the day of harvest (after 29 full treatment days). Here, we used all hourly temperatures after the last RER measurement until the root harvest for the polynomial fit. We also calculated a mean by averaging the deepest root tip positions of several cylinders per species and the corresponding temperatures at these positions. To test for differences between temperature treatments, we performed one-way ANOVAs for: RER, total root length, root dry weight, SRL, number of primary and secondary roots, below-ground biomass (BGB), leaf area, leaf mass, SLA, above-ground biomass (AGB), biomass fractions, the length of the root elongation zone and the distance from the root tip at which the first lignified xylem was detected. Additionally, we fitted a linear model to test for differences in light intensity of the fluorescent xylem between temperature treatments as well as for the light intensity increase along the root length (nested design) for each species. For the analysis of the post-harvest data, we merged the data from the three cold water baths since no significant differences were found among water baths (n.s. for factor bath). The number of replicates for different traits often deviated from the number of replicates at the beginning of the experiment, since individuals varied in the performance of certain traits. The normal distribution of post-harvest data was tested visually (q-q plots, histograms), and as log-transformation did not yield different statistical outcomes, non-transformed data are presented here. All statistical analyses and diagrams were done with R Statistical Software (version 3.0.2; R Development Core Team 2014) and the package 'ggplot2' (Wickham 2009).

Soil temperature
The cooling water bath systems provided an hourly mean water temperature of 1.5 ± 0.4 °C (±SD) in bath 1, 1.5 ± 0.6 °C in bath 2 and 1.4 ± 0.9 °C in bath 3 during the experiment, confirming that the temperature regime was virtually identical among the three baths.
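To make the temperature-estimation step described in the data analysis section concrete, the sketch below fits a third-order polynomial to hypothetical depth-temperature sensor data, evaluates it at an observed root tip depth, and then fits a simple linear model of RER against the estimated temperature. All numbers are illustrative, not the study's data, and the code is a plain Python stand-in for the authors' R workflow.

```python
# Hedged sketch of the depth -> temperature interpolation and the RER ~ temperature
# linear model described above. All sensor readings and RER values are hypothetical.
import numpy as np

# Hourly-mean soil temperature (deg C) at the sensor depths (mm) of one cold cylinder.
depth_mm = np.array([-10, -20, -40, -60, -80, -110, -140])
temp_c   = np.array([8.5, 6.0, 3.4, 2.1, 1.5, 1.2, 1.1])

# Third-order polynomial fit of temperature on depth.
coeffs = np.polyfit(depth_mm, temp_c, deg=3)

# Estimate the temperature at an observed root tip depth (e.g. -55 mm).
tip_temp = np.polyval(coeffs, -55.0)
print(f"estimated root tip temperature: {tip_temp:.2f} deg C")

# Simple linear model of RER (mm per 12 h) on estimated root tip temperature.
tip_temps = np.array([1.1, 1.6, 2.3, 3.0, 4.2])   # hypothetical
rer = np.array([0.1, 0.3, 0.6, 0.9, 1.4])          # hypothetical
slope, intercept = np.polyfit(tip_temps, rer, deg=1)
print(f"RER = {intercept:.2f} + {slope:.2f} * T")
```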
The mean temperature for all hourly intervals of the warm water bath was 10.6 ± 0.5 °C. The temperature in the cylinders declined with soil depth in both treatments. In the cold treatment, this temperature decrease was particularly pronounced between −10 and −80 mm (R² = 0.96, P < 0.001, Fig. 3), and most cold-treated roots did not grow deeper than −80 mm. Temperature decreased from an hourly mean of 8.5 ± 4.3 °C at the top (−10 mm, ±SD) to 1.5 ± 0.8 °C at −80 mm depth, and approached 1 °C below −80 mm soil depth. In the warm treatment, the temperature gradient was less steep, ranging from an hourly mean of 12.2 ± 3.9 °C at −10 mm to 10.6 ± 0.5 °C at −140 mm (R² = 0.72 ± 0.23, P < 0.001, Fig. 3). The diurnal temperature fluctuations in the upper 40 mm of the soil column were caused by fluctuating solar radiation in both the cold and warm treatments, especially since the uppermost 30 mm of the cylinders were not immersed in the water but insulated by the Styrodur layer. At −60 mm soil depth, these fluctuations became minor and the temperature difference of 8-10 K between the cold and warm treatments was stable (Fig. 3; see Supporting Information-Fig. S1). In the deepest layers where roots still grew, the hourly maximum temperatures never surpassed 5.2 °C at −60 mm and 2.2 °C at −80 mm and −110 mm soil depth in each of the three cold water baths.

Root length increment and temperature
RERs per 12-h interval in the cold treatment were linearly and positively correlated with the mean root tip temperatures (mean of the 12 hourly root tip temperatures during the corresponding interval) in all four species. Within each species, linear regressions between night-time RER (mm per 12 h) and root tip temperatures were always tighter than regressions with daytime RER values (Fig. 4). For each mean root tip temperature, we also presented the coldest and warmest hour during the 12-h interval (grey line in Fig. 4). Especially during the night interval, the warmest hour did not affect the mean root tip temperature, as indicated by the skewed position of the mean on the grey line. We were able to record very small RERs of <0.2 mm per 12 h for a few individual roots between 0.7 and 1.2 °C during overcast days and cooler night intervals at ca. −60 mm soil depth (Fig. 4). However, comparing the RERs at these low temperatures among the four species indicates that species may vary substantially in their capability to elongate their roots at such low temperatures. For instance, Poa roots elongated by 0.6 mm per 12 h, whereas the RERs of the three forbs were lower (Table 1). As expected, RERs between 1.0 and 5 °C were always significantly lower than the RERs in the control (P < 0.001 for all species, Table 2). No correlation was found between RERs and the small variation in root tip temperature in the warm control (data not shown). Root length increments in the cold treatment became smaller with increasing soil depth (data not shown), but roots still continued elongating very slowly as they approached the low temperature RER threshold. At harvest, the temperatures at the maximum rooting depths obtained were in the range of 0.8 to 1.4 °C for all four species, taking the hourly maximum temperatures during the root-forming period into account (Table 1). In Poa, the single deepest root grew to a depth of 106 mm, corresponding to a temperature of 1.0 °C with hourly minimum and maximum temperatures of 0.8 and 1.3 °C, whereas the deepest roots of the other species were found between 74 and 87 mm (Table 1).
Averaging the deepest single roots across several cylinders supported the difference between the three forbs and the grass species. Rumex roots stopped at a shallower soil depth, and thus at a slightly warmer temperature, corresponding to 2.4 ± 1.1 °C, followed by Ranunculus at 2.0 ± 0.9 °C, Tussilago at 1.9 ± 0.8 °C and Poa at 1.5 ± 0.3 °C (Table 1). The roots of the 10 °C control all reached the bottom of the cylinders (180 mm).

Root, leaf and plant traits
At harvest (after 29 full treatment days), the total root length of cold-treated plants, including first- and second-order roots, reached only 3 % (Rumex), 9 % (Ranunculus), 13 % (Tussilago) and 10 % (Poa) of the length of roots of the control plants (Table 2). Final root dry weight was similarly affected. In cold soils, the root systems were not only reduced in size (Table 2), but roots were also significantly lighter per unit length, i.e. SRL (m g−1) was reduced in all species. Low temperature almost completely inhibited the development of lateral roots, compared to the high number of secondary roots in the warm treatment (Table 2). If any, lateral roots were found only close to the root base in the warmer uppermost centimetre of the substrate (data not shown). The root mass fraction (RMF) was significantly lower in the cold substrate in all species (−66 % across all species in comparison to the controls).

Table 1. Root length responses near the low temperature limit. RERs during the 12-h interval with the lowest temperature in the cold treatment, the deepest single root tip position and the mean maximum root tip depths (averaged across cylinders per species ± SD) at harvest. Corresponding root tip temperatures were derived from the temperature gradients (polynomial functions, calculated for 12 h for RER and for the last five treatment days prior to harvest). # Hourly minimum and maximum temperatures for n = 26-30 single roots per species during root formation.

Table 2. RER, root traits, shoot traits and biomass fractions (mean ± SD). P-values from one-way ANOVAs, testing each trait as the dependent variable between cold and warm treatment for each species. Significant differences at ***P < 0.001; **P < 0.01; *P < 0.05. (RMF, cold vs. warm, per species: 0.04 ± 0.02 vs. 0.24 ± 0.07 ***; 0.16 ± 0.08 vs. 0.42 ± 0.11 ***; 0.17 ± 0.08 vs. 0.33 ± 0.05 ***; 0.14 ± 0.05 vs. 0.24 ± 0.03 **.)

Leaf area (cm²), leaf dry mass (g), SLA (m² g−1) and also the total above-ground dry weight (g) were significantly smaller and lower, respectively, in cold compared to warm soils, except for Ranunculus. Ranunculus showed a slightly reduced leaf area in the cold, but its leaf mass, SLA and total above-ground dry weight did not differ between the temperature treatments. Leaf mass fraction (LMF) was not affected by the temperature treatments. Unexpectedly, stem mass fraction (SMF) was higher in Ranunculus and Tussilago under cold treatment but slightly lower in Rumex and unaffected in Poa (Table 2).

Root anatomy
For each species, the deepest roots were selected for the anatomical assays. Nevertheless, the number of root replicates per species dropped, as only perfectly longitudinal sections of roots were further processed (n = 5 for Poa and Ranunculus; n = 7 for Rumex and Tussilago). For Poa, the hourly maximum root tip temperatures of the selected roots were never higher than 2 °C; for Rumex (except one root), Ranunculus and Tussilago (except one root), root temperatures never surpassed 3 °C.
The cell density in the root tip, counted right behind the root meristem initials (root apex) in an area of 0.2 × 0.2 mm, was similar in cold and warm treatments (Table 3), and these meristematic cells had diameters between 8 and 10 µm. Unexpectedly, the low temperature had no significant effect on the final cell length in Poa and Rumex, but a trend towards shorter cells was found in cold-treated Ranunculus and Tussilago (Table 3). The distance-to-apex data revealed that cell elongation was strongly reduced by low temperature in all three forb species, but not significantly in the grass species (Fig. 5). Thus, in cold-treated roots, cells remained in the small (meristematic) state over a longer distance from the apex compared to warm-treated roots but, contrary to expectation, reached final cell length over a shorter distance from the apex (Fig. 6). Yet, there were far fewer elongating cells in the elongation zone than in the warm-treated roots, in which cells reached final cell length over a longer distance and there were more of these elongating cells (Fig. 6). The fluorescent berberine-aniline blue staining employed here was effective in detecting xylem lignification in the thin sections of the roots. The low temperature treatment retarded the lignification of the xylem, indicating a lower rate of cell differentiation in the cold treatment. The first lignified xylem elements (fluorescent signal observed through light microscopy) emerged at a greater distance from the root apex in cold-grown roots (Fig. 5). In Rumex roots, the species that stopped growing at a relatively warmer temperature, the lignified xylem was detectable at a similar distance from the apex in both temperature treatments. The degree of lignification, measured as lightness (L-channel value) of the fluorescent xylem, showed an overall higher sensitivity than direct visual observation of the microscope images. The lightness values were significantly lower, and remained lower with increasing distance from the apex, under cold conditions for three species (trend only in Ranunculus; Fig. 7, Table 4). Both Ranunculus and Poa showed brighter fluorescence signals than the other two species, suggesting a higher degree of lignification compared to Tussilago and Rumex (Fig. 7; see Supporting Information-Fig. S4).

Table 3. Cell density in an area of 0.2 × 0.2 mm close to the root apex and the final cell length reached at >5 mm distance from the root apex (mean ± SD). Number of replicates (from different individuals): Rumex: n = 4; Ranunculus: n = 5; Tussilago: n = 6; and Poa: n = 6. # Trend for shorter cells in Ranunculus (P = 0.09) and Tussilago (P = 0.06) under cold treatment.

Discussion
We grew four alpine taxa under typical alpine climate for above-ground plant tissues but at two tightly controlled, contrasting root zone temperatures. Temperature variability in the top 30 mm was substantial due to solar radiation, but below that level temperatures decreased sharply with increasing soil depth in the cold treatment. At soil depths between −40 and −60 mm, where most cold-treated roots reached the cylinder walls, temperatures were mainly below 5 °C [see Supporting Information-Figs S1-S3]. The lowest root tip temperatures during RER measurements were 0.7 to 1.2 °C, and roots of all four alpine species were still capable of elongating at these low temperatures.
To delineate the physiological minimum temperature for root growth (sensu: root formation), we took the single deepest root per species formed during the experiment; that root position corresponded to 1.0 °C in P. alpina and between 1.0 and 1.2 °C for the three forb species. Taking the hourly maximum temperature during the period of the single deepest root formation into account, the critically low temperature for root growth is 1.4 °C (Table 1). Given that one single deepest root may not fully represent the minimum temperature threshold for the species, we calculated a mean of >20 single deepest roots per species and their corresponding root temperatures. Poa had the lowest mean temperature threshold with 1.5 °C, followed by 1.9 °C in T. farfara, 2.0 °C in R. glacialis and 2.4 °C in R. alpinus. However, the hourly maximum temperatures for this mean per species covered a much wider temperature range, since not all deepest roots per cylinder reached depths beyond −60 mm, where temperature fluctuations became small (Table 1). The temperature thresholds for root growth observed here are slightly lower than those reported for montane tree taxa (Schenker et al. 2014), arctic plants (Bliss 1956; Ellis and Kummerow 1982) and our previous estimates. These were based on longer root observation intervals (4 days) for Ranunculus and Poa (Nagelmüller et al. 2016a), which arrived at thresholds close to 2 °C for these two species. The root systems that developed in the cold soil profiles were highly retarded, and AGB was also negatively affected compared to controls, except for R. glacialis, which showed reduced leaf area but not significantly lowered leaf mass. Biomass allocation expressed as SMF and RMF was reduced under the cold treatment, except for LMF. Unexpectedly, SMF was significantly higher in Ranunculus and Tussilago under the cold treatment. The higher SMF could be an effect of delayed leaf unfolding, causing petiole mass to contribute to the higher stem fractions (in terms of their function, petioles were considered to belong to the stem fraction). Compared to the earlier in situ experiment with the species R. glacialis and P. alpina (Nagelmüller et al. 2016a), the low temperature treatment was more severe here, which contributed to the significantly negative effects on AGB that were not observed in that earlier work. It is crucial in studies that aim at defining thermal thresholds for growth that temperatures do not fluctuate and that temperature sensors provide the needed temporal and spatial resolution as well as accuracy. Temperature means may be fully misleading, particularly if they include periods with higher temperatures that might be sufficient for plants to grow. In the present study, we always considered the hourly maximum temperatures during the corresponding observation period (RER and root formation period, respectively). Contrary to expectation, the cell elongation zone in root tips was not longer but shorter in cold-grown roots, as we measured a shorter distance from the root apex to the first fully elongated cells (Fig. 6). We explain this observation by the fact that the sharp temperature gradient in the cold treatment did not permit root tips to expand beyond a certain temperature. Similar observations emerged in Arabidopsis thaliana roots exposed to 4 °C: root apices were deformed and growth zones shortened, causing a swelling in the primary roots (Plohovska et al. 2016).
These authors explained the negative effect of low temperature on root elongation to be associated with impaired organization of the cytoskeleton, particularly microfilaments. Also in an earlier study on roots of three cultivars of winter wheat which differed in frost tolerance, Abdrakhamanova et al. (2003) related the changes in the microtubuli organization (especially, the disassembling of microtubuli during frost events) to the capability of roots of the tolerant cultivar to recover from frost and to grow at 4 °C. We also assume feedback regulation from the elongation zone to the meristematic zone that causes a cessation of further cell production at such minimum temperature well known from root growth kinematic studies (Silk 1992;Beemster and Baskin 1998;Rost 2011;Kumpf and Nowack 2015). Interestingly, the most low temperature tolerant species in terms of root elongation and rooting depth (and the only monocot), P. alpina, almost retained the proportion between the length of the cell elongation zone and the meristematic zone. The higher sensitivity of cell elongation compared to cell division at extremely low positive temperature causes the pressure for root tip progression into deeper (and colder) soils to cease and thus reduces RER close to zero. The temperature limit for cell elongation can be expected to occur at slightly higher than the minimum temperatures estimated from root tip position (Table 1) because the end of the elongation zone was between 2 and 4 mm above the tip, corresponding to ca. 0.1-0.2 K higher temperature. While cell elongation was clearly limited, the root apical meristem was still able to produce new cells, which accumulated for a longer distance from the apex and stayed small (in meristematic size). In the warm treatment, root cells at a similar distance from the apex kept elongating to final cell length (Fig. 6). These findings are in line with results of earlier studies that showed that growth restriction at low temperature does not start with an inhibition of cell division and cell production (Francis and Barlow 1988;Körner and Pelaez Menendez-Riedl 1989). Yet, at some point (in time and/or space), cell production must be downregulated through feedback from limited cell elongation and cell differentiation. We assume that the limitation of cell enlargement and the associated cell differentiation process are critical for root growth at very low temperature. Xylem differentiation may play an important role and lignification is a potentially critical candidate. The lignification of conduits is essential for the functionality of the xylem, and only a tight conduit system can contribute to the needed turgor pressure required for soil penetration. In the low temperature treatment, lignified, and thus functional xylem vessels, became visible only at a greater distance from the root apex, and the lignin signal was less intense, although these cells had more time (in the sense of tissue development) to accumulate lignin in the cell walls. The limited lignification of cold-grown roots may contribute to the fragility and overall 'glassy' texture of the whitish root tissues produced below 5 °C. Similar morphological changes in cold-treated roots were reported by Schenker et al. (2014) and Nagelmüller et al. (2016a). 
However, the biochemical processes underlying the inhibited lignification are still unclear, especially, whether the synthesis of lignin and its precursors, and/or the lignin deposition (polymerization) are affected under cold temperature, awaits a further explanation. Cold acclimation in plants (to chilling, positive temperatures) has often been associated with increased lignin contents in different plant organs including roots (Cabane et al. 2012 and citations therein). On the other hand, Donaldson (2001) reported that lignification of the secondary wall of latewood tracheids was often incomplete at the onset of winter, thus suggesting that lignification is sensitive to temperature. The lower temperature threshold for xylogenesis in the alpine Rhododendron shrub (2.0 ± 0.6 °C) than in conifers at the treeline (4-5 °C; Rossi et al. 2008) has been interpreted as a consequence of exposure to cooler microclimate of the alpine shrub (Li et al. 2016), particularly, at nights when radiation losses are high and convective heat exchange is low. The critically low temperature for lignification in the alpine herb and grass species observed here corroborated this minimum temperature and may be a common threshold for lignification in many cold-adapted angiosperms. Conclusions Roots grown at temperatures between 1 and 5 °C showed strongly reduced elongation rates so that these roots contributed very little to the entire root system compared to control roots grown at 10 °C. Accordingly, total root biomass was substantially reduced and hardly any secondary roots were formed at temperatures below 5 °C. Temperatures in the range of 0.8 to 1.4 °C are critically low temperature thresholds for root formation in the four studied alpine plant species. The terminal zones of root tips exposed to such temperatures showed clearly inhibited cell elongation and xylem lignification. We conclude that cell differentiation and lignification are the crucial processes that prevent any further extension of root tips into colder soil space and limit tissue formation in cold environments. Figure S1. Frequency distribution of the hourly soil temperatures in the six sensor depths in the four baths (three cold and one warm control bath) during the 29 treatment days. Figure S2. The temperature course of the seven temperature sensors in the cold treatment cylinders (bath 2) during the period of root elongation measurements (RER). Figure S3. Estimation of mean root tip temperature for a RER per 12 h from soil depth and soil temperatures (based on polynomial regressions). Figure S4. Snap shots of longitudinal cuts of warm-and cold-treated root tips showing the reduced lignification.
2018-04-03T01:09:49.815Z
2017-10-19T00:00:00.000
{ "year": 2017, "sha1": "da4d7d40358298df3218f3c453858d3bf278ec8b", "oa_license": "CCBY", "oa_url": "https://academic.oup.com/aobpla/article-pdf/9/6/plx054/22001491/plx054.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "da4d7d40358298df3218f3c453858d3bf278ec8b", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
231701522
pes2o/s2orc
v3-fos-license
Coronavirus disease 2019 (COVID-19) research agenda for healthcare epidemiology This SHEA white paper identifies knowledge gaps and challenges in healthcare epidemiology research related to coronavirus disease 2019 (COVID-19) with a focus on core principles of healthcare epidemiology. These gaps, revealed during the worst phases of the COVID-19 pandemic, are described in 10 sections: epidemiology, outbreak investigation, surveillance, isolation precaution practices, personal protective equipment (PPE), environmental contamination and disinfection, drug and supply shortages, antimicrobial stewardship, healthcare personnel (HCP) occupational safety, and return to work policies. Each section highlights three critical healthcare epidemiology research questions with detailed description provided in supplementary materials. This research agenda calls for translational studies from laboratory-based basic science research to well-designed, large-scale studies and health outcomes research. Research gaps and challenges related to nursing homes and social disparities are included. Collaborations across various disciplines, expertise and across diverse geographic locations will be critical. (Received 7 January 2021; accepted 7 January 2021) The emergence and rapid worldwide spread of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has led to substantial social and economic disruption and loss of life. Throughout the pandemic, healthcare providers, hospitals, and health systems have worked tirelessly to provide safe care for patients while simultaneously ensuring safety for frontline providers. Early efforts to prevent transmission relied on prepandemic evidence and rapidly emerging novel data. Through the first 9 months of the pandemic, >60,000 articles were published on SARS-CoV-2 and coronavirus disease 2019 (COVID- 19), not including the now ubiquitous preprints. 1 As a result, the scientific community already has learned a great deal about COVID-19, leading to evolving guidelines for treatment, testing, and prevention. [2][3][4][5] Despite considerable progress, the community still has much to learn. As the adage goes, "The more you know, the more you realize how much you don't know." This SHEA white paper identifies remaining knowledge gaps and challenges in healthcare epidemiology research related to COVID-19. These gaps are described in 10 sections : epidemiology, outbreak investigation, surveillance, isolation precaution practices, personal protective equipment (PPE), environmental contamination and disinfection, drug and supply shortages, antimicrobial stewardship, healthcare personnel (HCP) occupational safety, and return to work policies (Table 1; Supplementary Tables 1-4 Table 5 online). Research gaps and challenges related to nursing homes and social disparities are included. Epidemiology Understanding the epidemiology of SARS-CoV-2 is critical to minimizing the burden of the COVID-19 pandemic in healthcare settings. Epidemiologic research on individual as well as populationlevel transmission dynamics, risk factors for virus acquisition, and predictors of severe disease outcomes can inform healthcare capacity planning, clinical care, and infection prevention practices within healthcare settings. 
Three research domains identified below (Supplementary Table 1 online) represent priority areas with unanswered questions in the epidemiology of disease relevant to healthcare and infection prevention: Priority area 1: Understand heterogeneity in epidemiology and transmission dynamics of SARS-CoV-2. Priority area 2: Define characteristics and impact of asymptomatic/ pre-symptomatic patients infected with SARS-CoV-2. Priority area 3: Characterize risk factors that lead to severe disease outcomes including age, sex, and race, with special emphasis on health disparities, socio-economic status, and comorbidities. The heterogeneity in COVID-19 transmission dynamics is typified by both strain differences and "superspreading events" in which a small number of individuals account for a large fraction of transmission. Recent reports of variant strains being associated with an increased facility for spread as well as higher viral burdens in infected individuals require further explanation. Identifying the causes of superspreading in healthcare settings is key, especially in various settings that house high-risk populations such as in nursing homes and other long-term care facilities. [6][7][8] Also critical is an enhanced understanding of viral transmission patterns through the air (in more depth than simply droplet versus airborne transmission) that can support evidence-based PPE, physical distancing, and ventilation policies, which currently vary across healthcare settings. 6 With continued shortages in PPE, identifying the relative risk of occupational versus community exposures in HCP is essential to identify failures in occupational safety and implement comprehensive interventions to safeguard HCP. One unique epidemiologic feature of COVID-19 is the sheer number of asymptomatic or presymptomatic cases reported, which has ranged from 1% to >50% and has resulted in widespread increases in SARS-CoV-2 testing. 6,[9][10][11][12][13][14][15] Several studies have documented high viral loads in asymptomatic individuals, which suggests that they could be significant contributors to transmission and that symptom screening alone cannot contain transmission. 9,16 This issue highlights the need for studies in children, who have lower rates of illness and hospitalization and gather in school and daycare settings (Supplementary Table 5 online). 17 An understanding of the role of asymptomatic or presymptomatic individuals in transmission will influence societal considerations regarding opening schools, resuming economic activities such as opening gyms, and allowing social events such as having small and large gatherings. Finally, studying the epidemiology of severe and post-acute disease can inform patient and HCP safety protocols, clinical practice guidelines, vaccine recommendations, and concurrent management of other chronic conditions. To date, the burden of SARS-CoV-2 infection has had a disproportionate impact on racial and ethnic minority communities, frontline workers, and individuals with underlying conditions, such as diabetes, hypertension, obesity, and heart, lung, or kidney disease. 7,[18][19][20][21] Prioritizing research on the underlying societal and biological risk factors and optimal prevention and treatment for these high-risk groups is important. Furthermore, the extent and burden of long-term cardio-metabolic, respiratory, neurological, and psychological sequelae, including among asymptomatic individuals or those with mild disease, requires further study. 8,22,23 2. 
Outbreak investigation COVID-19 poses a unique challenge in outbreak investigation stemming from its novelty coupled with the rapid worldwide spread into all sectors of society, including into diverse healthcare Priority area 2: Determine optimal personnel, expertise, and training required to conduct rapid SARS-CoV-2 and other outbreak investigations. Priority area 3: Identify optimal resources and technology (reporting tools, software and hardware) to support outbreak investigations. Several studies have highlighted the need for adequate resources, infrastructure, and personnel with expertise and leadership support to conduct timely, evidence-based infection prevention activities, including outbreak investigations. [25][26][27][28] HCP that work in overtaxed health care systems, faced with a rapidly spreading outbreak, as well as confusing and changing guidance, are at an elevated risk for burnout and moral distress. 29 These challenges to conducting rapid and effective outbreak investigations are further amplified in nursing homes and small to mid-sized hospitals. 30 Compared to larger hospitals, smaller hospitals face unique challenges, including infection preventionists (IPs) with other noninfection-related responsibilities, lack of specific IP training, lack of data synthesis and reporting tools, and high personnel turnover. 28,[31][32][33] For example, rapid reporting systems can provide benchmarks to improve early outbreak detection in hospitals, nursing homes, and other healthcare settings leading to early interventions to curtail the outbreak. 34 Technical knowhow and expertise in conducting outbreak investigations are important to identify key characteristics of the outbreak, including: populations being most affected; unique presentations that could vary by age, gender, race, comorbidities, or frailty; and patterns of transmission. Such expertise should also provide institutions with rapid, simple, systemic and culturally appropriate interventions. Surveillance strategies Robust surveillance of COVID-19 is critical to designing effective strategies for timely identification of COVID-19, limiting the spread of disease, and informing public health priorities and responses. Three research domains identified below (Supplementary Table 1 online) represent priority areas with unanswered questions in surveillance strategies relevant to healthcare and infection prevention: Priority area 1: Determine optimal and rapid surveillance strategies to accurately define the scale and depth of COVID-19 and its impact on populations, communities, and individuals. Priority area 2: Determine and evaluate high-yield, cost-effective, and efficient testing-based population surveillance strategies. Priority area 3: Identify highest risk populations for targeted interventions based on their age, gender, race, comorbidities, settings, and community spread. Reverse transcription polymerase chain reaction (RT-PCR) tests for SARS-CoV-2 can remain positive up to 3 months and do not directly translate to transmissibility. Viable virus has often not been found beyond 10 days in immunocompetent hosts, barring some instances. [35][36][37][38][39][40] As a result, use of RT-PCR results for surveillance would overestimate COVID-19 incidence and prevalence, leading to misclassification of community-level burden. 
Large-scale longitudinal surveillance studies are needed to evaluate duration of test positivity (ie, RT-PCR, antigen, and serology) and risk for COVID-19 reinfection, with subgroup evaluation by symptoms (eg, asymptomatic, mild-to-moderate symptoms, and hospitalized patients). 39,40 Although manufacturers report high sensitivity and specificity against assay controls, clinical sensitivity and specificity for COVID-19 infection is relatively unknown. In some instances, sensitivity has been reported to be as low as 70%, depending on the quality of the specimen obtained and the time at which the sample is taken during a patient's illness. 41,42 Studies are needed to evaluate the clinical performance characteristics of COVID-19 testing tools against the sensitivity and specificity of full-symptom screening, including early indicators of infection. Results from these studies will inform optimal sentinel surveillance strategies for large populations through en masse testing, such as pooled saliva sample testing or sewer line sampling. Large-scale surveillance data within a wide variety of community and work settings and activities can lead to identification of groups and locations associated with high risk for transmission, leading to improved strategies for prevention and PPE use. Specific attention is needed within healthcare settings including nursing homes; assisted living facilities; group homes; factories and food processing plants; jails and prisons; and places of education such as schools, colleges, and universities. Supplementary Table 5 (online) highlights additional considerations relevant to pediatric surveillance, including surveillance for the multisystem inflammatory syndrome in children (MIS-C). Exposure risks may be further defined through novel surveillance tools (eg, personal exposure monitors and tracking apps). Isolation precaution practices Standard and transmission-based precaution practices are cornerstones of preventing transmission of infectious pathogens and ensuring HCP and patient safety across all healthcare settings. 43 The US Centers for Disease Control and Prevention (CDC) developed and updated interim infection prevention and control recommendations regarding the use of transmission-based isolation precautions when caring for patients with suspected or confirmed SARS-CoV-2 infection in healthcare facilities. 44 This guidance focuses on HCP and patient screening, testing protocols, patient placement and management practices, use of PPE, and family/visitor interactions. However, as the COVID-19 pandemic continues to unfold, so does the need for a more rigorous evidence base to inform isolation practices and to assist healthcare facilities with effectively implementing public health guidance. Three research domains identified here (Supplementary Table 2 online) represent priority areas with unanswered questions in isolation precautions relevant to healthcare and infection prevention: Priority area 1: Determine when and how to initiate transmissionbased isolation precautions for COVID-19. Priority area 2: Determine how to optimize management and care delivery while isolation precautions are in place. Priority area 3: Determine when to discontinue COVID-19 isolation precautions and reinstitute isolation in cases of possible reinfection. COVID-19 has a wide variety of clinical presentations ranging from asymptomatic to severely ill. 
Healthcare facilities use various criteria based on individual signs and symptoms to determine when to test individuals for COVID-19 and initiate isolation precautions while awaiting results, and they use various testing protocols to detect asymptomatic and presymptomatic individuals. Although these are critical strategies for stopping COVID-19 transmission, questions remain about the effectiveness of various screening and testing protocols to initiate isolation precaution practices and reduce transmission risk. Once in isolation, use of PPE (ie, gloves, gown, mask, N95 respirator or powered air-purifying respirator [PAPR], eye protection) for known or suspected COVID-19 patients can pose challenges to the delivery of care and can potentially delay recognition of other healthcare-associated conditions. 46 Furthermore, COVID-19 isolation can be problematic for hospitalized patients and nursing home residents due to the use of equipment that can inhibit visual and auditory cues, and due to visitor restrictions that result in less family contact and support. The inability to connect with family members is one of the most distressing consequences of COVID-19 isolation, with a potential for long-lasting psychological consequences in survivors. 47,48 Research to better understand, identify, and test approaches to mitigate the psychological, physical, and care delivery challenges related to COVID-19 isolation precaution practices, including the benefits and unintended consequences of family and visitor policies and restrictions, is needed. [46][47][48][49]

Recommendations for discontinuing isolation have been based on symptoms, test results, and time from positive test. 37,50,51 Discontinuation of isolation precautions allows individuals to engage in normal and/or recovery-focused activities. To do so safely, however, discontinuation policies must also take into account the risk of secondary transmission; in other words, they must balance the risk of transmission events if isolation is discontinued too early against the risks of staying in isolation too long. Thus, research on when and how to safely discontinue isolation that balances public health and patient priorities is needed. Improved understanding of how continued positive test results correlate with transmission will also help guide isolation discontinuation policies. 52 These research questions and resultant policy implications affect HCP directly. Early studies suggest that interventions, such as a triage committee and team decision making, may decrease the perception of personal culpability for untoward patient outcomes. 29 Thus, meaningfully enhancing HCP engagement in developing research questions and in decision-making processes is critical to reducing moral distress and burnout.

Personal protective equipment (PPE)

General recommendations for HCP use of PPE are available from the CDC; expert groups have provided additional recommendations for use of PPE in crisis scenarios. 5,44 Both CDC and expert guidance have been largely based on limited data extrapolated from other viral infections (eg, influenza, SARS-CoV-1) and/or studies with significant biases limiting generalizability. Research domains identified below (Supplementary Table 2 online) represent priority areas with unanswered questions concerning PPE. COVID-19 is generally thought to spread primarily through respiratory droplets; thus, the current role of PPE is aimed at decreasing droplet transmission. Masks are used as the cornerstone of source control (for a symptomatic or asymptomatic person with COVID-19). 53
However, it is possible that COVID-19 transmission can occur through the eyes, either by direct droplet inoculation or via autoinoculation. In this setting, eye protection (face shields or goggles) has also been recommended and may play a key role in infection prevention. In a 2014 study that utilized a cough simulator and a breathing worker simulator to model droplet transmission, face shields prevented exposure to droplets; however, masks were not used. 54 A meta-analysis indicated that physical distancing, masks, and eye protection decreased the odds of COVID-19 transmission; however, the relative risk reduction of eye protection plus a face mask for COVID-19 has not been well described. 55 Additionally, the benefit of face shields alone in source control of an asymptomatic or presymptomatic patient is not known.

Research on compliance with PPE guidance in prior outbreaks has focused on methods of delivering training. 56 In the Ebola virus disease outbreak, a human-factors engineering approach to training and to ensuring appropriate donning and doffing decreased ambiguity, explored failure modes, and enhanced teamwork to improve compliance with PPE guidance. 57 Beyond training, other behavioral, adaptive, cultural, and systemic factors may play a role in adherence to best-practice PPE use. Studies that utilize methods from healthcare epidemiology, infection prevention, human factors engineering, and medical sociology are needed to identify and mitigate (or enhance) the sociobehavioral, adaptive, contextual, and human factors that impact appropriate PPE use. In the current pandemic, where mask-wearing has become politicized, understanding these factors may be even more essential. Understanding the socio-behavioral, adaptive, and contextual reasons for not following PPE guidance, as well as the human factors associated with appropriate use, could improve PPE adherence among HCP. Finally, PPE use has been complicated by shortages, which resulted in institutions requiring healthcare providers to reuse single-use items such as N-95 masks. The shortage of essential supplies can increase anxiety and fear in those who need them; however, the impact of PPE shortages on future use of PPE is a consideration that requires further investigation. 58-60

Environmental contamination and disinfection

Surface contamination with SARS-CoV-2 has been frequently described, 61-74 but the role of environmental contamination with SARS-CoV-2 in transmission in healthcare settings such as hospitals and nursing homes remains unclear. Three research domains identified below (Supplementary Table 2 online) represent priority areas with unanswered questions concerning environmental disinfection in healthcare settings:

Priority area 1: What are the risks associated with environmental contamination with SARS-CoV-2 for HCP and patients?

Priority area 2: What are optimized methods for identifying environmental contamination with SARS-CoV-2?

Priority area 3: Determine the optimal methods for disinfection of healthcare environments.

Defining the risk associated with surface transmission is essential in assessing the potential benefit of decontamination and disinfection strategies. Although previously published studies demonstrate potential for fomite transmission, additional studies are needed to assess risk factors for both contamination and infection associated with contact with contaminated surfaces.
In addition, these surface contamination studies were primarily cross-sectional studies and case reports using PCR detection of viral RNA on surfaces in COVID-19 units. Thus, although standardized methods have been proposed for evaluating surface contamination, evidence-based determination of optimal sampling strategies will enhance future work. 75 Subsequent studies should include assessment of detection and correlation of findings with infectivity. Viral culture may be more useful in determining risk of infectivity, but there is inadequate infrastructure to broadly expand study of environmental contamination using this method. 68 Enveloped viral surrogates, including mammalian viruses and bacteriophages, should be integrated into disinfection assessments. Establishing the infrastructure to define the risk of surface contamination for other high-consequence pathogens is needed for future pandemic preparedness.

If surface transmission of COVID-19 is described, then reducing this risk within healthcare settings is necessary to provide care for vulnerable populations and to protect HCP. Implementation challenges can be substantial, but leadership support appears helpful. In a national study conducted in Thailand in 2014, Apisarnthanarak et al 76 found that good-to-excellent hospital administration support for the infection control program was significantly associated with greater adherence to implemented environmental control and disinfection protocols. Thus, identifying the incremental benefit of enhanced disinfection strategies, such as ultraviolet germicidal irradiation and vaporized hydrogen peroxide, compared to commonly available disinfectants will inform routine disinfection practices. 72,77 The methods for evaluating the impact of such disinfection strategies should focus on clinical outcomes and laboratory methods that predict infectivity.

Drug and medical supply-chain shortages

The supply chain for drugs and medical supplies is global; most drugs and medical supplies are made outside the United States. 78 Early in the pandemic, there were significant shortages of PPE, ventilators, and materials needed for laboratory detection of COVID-19. 79 Drug shortages are situations in which patients are unable to access clinically interchangeable versions of regulated prescriptions due to supply limitations. 78 Over the last decade, the number of drug shortages has increased dramatically. 80,81 COVID-19 made the consequences apparent: manufacturers closed, governments prohibited drug and supply exports, and patients and organizations stockpiled drugs. 29,[82][83][84] Of the drugs consumed in the United States, 90% of raw active ingredients (active pharmaceutical ingredients [API]) are made in foreign facilities (80% in China and India). 78,85 Thus, supply-chain shortages are a complex global issue and can be influenced by geopolitical issues, trade, civil unrest, weather, and pandemics. 82,86 The full extent of the impact of the COVID-19 pandemic on the drug and medical supply chain, however, is unknown. It is important to recognize and understand disruption in these supply chains to prepare for future pandemics and other global emergencies. Three research domains identified below (Supplementary Table 3 online) represent priority areas with unanswered questions concerning drug and medical supply shortages:

Priority area 1: Define the extent of drug and medical supply-chain shortages caused, directly or indirectly, by the COVID-19 pandemic.
Priority area 2: Identify methods to disseminate best practices in order to optimize patient care in the face of drug and medical supply-chain shortages, from local protocols to international policies that help mitigate future shortages.

Priority area 3: Characterize the clinical consequences of drug and medical supply-chain shortages.

Recognizing the disruption that can occur as a result of drug and medical supply-chain failures, legislation included in the March 2020 "Coronavirus Aid, Relief, and Economic Security (CARES) Act" requires drug manufacturers to report the anticipated duration of a shortage and the problem leading to it. 80 However, the extent of these disruptions and the effectiveness of such policies during and after the COVID-19 pandemic are unknown. Also unknown are the serious outcomes related to these shortages, including worsening illness and premature death. [87][88][89] In addition, little is known about how drug shortages affect clinicians who prescribe medications to treat critically ill patients. HCP reported anxiety about their inability to provide competent and evidence-based care during COVID-19. 60 Minority populations and vulnerable populations may be disproportionately affected by these shortages. Drug and other supply shortages may pose ethical dilemmas when decisions must be made to treat one patient over another. 90 The impact of shared decision making and triage committees, which remove the responsibility from the individual provider, on HCP mental health and resiliency should be systematically evaluated. 29,60,90 Addressing these research questions related to supply-chain disruption and the impact of drug and medical supply shortages will help preparations for future global emergencies and will inform national and international policies aimed at decreasing the impact of drug shortages on patient outcomes.

Antimicrobial stewardship

COVID-19 caused a rapid shift in the delivery of health care, including the suspension of elective procedures and the transition of in-person visits to virtual encounters. 91,92 Changing healthcare delivery may lead to an increase or decrease in antibiotic prescribing, depending on the setting and patient population. Three research domains identified below (Supplementary Table 3 online) represent priority areas with unanswered questions concerning antimicrobial stewardship:

Priority area 3: Develop and implement optimal antimicrobial stewardship program (ASP) strategies to improve antimicrobial use and patient outcomes while adapting to changing healthcare delivery during COVID-19.

Although a decrease in the number of admitted patients may lead to a reduction in overall antimicrobial use, several studies have suggested that a large percentage of COVID-19 patients receive antibiotics presumptively, to cover the possibility that the presenting infection is bacterial or that a superimposed bacterial infection is worsening the severity of COVID-19. [93][94][95] Similarly, changes in the volume of healthcare access (decreased hospitalizations and outpatient visits) during the pandemic will limit longitudinal comparisons due to altered denominators for typical use metrics, patient bed-days of care (for acute care), and in-person visits (for outpatient care). Additionally, with routine pediatric care transitioning to predominantly telehealth visits, the pandemic's impact on outpatient antibiotic prescribing practices for children remains unexamined (Supplementary Table 5 online). Finally, the downstream impact of changes in antimicrobial prescribing on antimicrobial resistance and Clostridium difficile is unknown.
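As a toy illustration of the denominator issue noted above (all numbers are hypothetical), the sketch below computes the standard inpatient use metric, days of therapy (DOT) per 1,000 patient-days, before and during a period of reduced admissions.

```python
# Illustrative sketch (assumed numbers): the standard inpatient antimicrobial use
# metric, days of therapy (DOT) per 1,000 patient-days, and how a pandemic-driven
# drop in patient-days can shift the rate even if prescribing behavior is unchanged.

def dot_per_1000_patient_days(days_of_therapy, patient_days):
    return 1000.0 * days_of_therapy / patient_days

# Pre-pandemic quarter (assumed): 4,500 DOT over 9,000 patient-days
print(dot_per_1000_patient_days(4500, 9000))   # 500.0

# Pandemic quarter (assumed): fewer admissions shrink the denominator to 6,000
# patient-days while 3,300 DOT are recorded; the rate rises to 550 even though
# total antibiotic use fell.
print(dot_per_1000_patient_days(3300, 6000))   # 550.0
```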
Typically, viral respiratory infections have been associated with an increased risk of bacterial or fungal coinfections, which substantially increase the risks of morbidity and mortality. [96][97][98] However, data on coinfections in COVID-19 have been sparse and heterogeneous, with estimates ranging from 3% to 30%. [99][100][101][102][103][104][105][106][107][108] To inform antimicrobial treatment, we must understand the risk factors and timing for the development of coinfection. For example, the risk of coinfection related to comorbidity (eg, immunosuppression) or exposure (eg, hospitalization, ventilation, device placement) is unknown. 97 Furthermore, quantification of coinfection is limited by diagnostic difficulties, including (1) distinguishing colonization from infection; (2) improving diagnosis of coinfection versus alternative causes of decompensation (eg, acute respiratory distress syndrome from COVID-19); and (3) limited respiratory culture data due to SARS-CoV-2 transmission concerns. 109

The ongoing COVID-19 pandemic has drastically changed the way that ASP teams interact with patients and other healthcare providers. 110,111 Although the focus of ASPs may have shifted during the pandemic, improving antimicrobial use and stemming the tide of antibiotic resistance remain at the core. 110,111 ASP teams are now ubiquitous in US hospitals, but ensuring that all hospitals have adequate infectious disease expertise on their teams may require novel approaches. 112 Simultaneously, the responsibilities of ASPs have increased, with many playing an active role in COVID-19 management, such as guideline development or remdesivir allocation. 113,114 Virtual strategies, such as 'tele-stewardship' and nimble regional or national 'hotlines,' particularly for small and rural hospitals, need to be evaluated and implemented. Likewise, although some nursing homes have established comprehensive nursing home ASPs, a required condition of participation by the US Centers for Medicare and Medicaid Services (CMS), many would benefit from additional support. 95

Healthcare personnel safety and occupational safety

The COVID-19 pandemic has raised concerns about HCP safety. Research is needed to identify strategies to protect HCP from acquiring SARS-CoV-2 at work and to support HCP facing physical, psychological, social, and organizational challenges related to the pandemic. 58,59,76 Three research domains identified below (Supplementary Table 4 online) represent priority areas with unanswered questions concerning HCP and occupational safety:

Priority area 1: Define risks that increase HCP exposure to and acquisition of SARS-CoV-2 and interventions that can mitigate these risks.

Priority area 2: Determine optimized strategies to protect HCP emotional and psychological health.

Priority area 3: Determine the impact of social and organizational strategies to maintain the health and wellness of HCP.

Understanding the factors that increase HCP risk of acquiring COVID-19 is essential to developing an evidence-based infection prevention program. These factors may include attributes of the patients under the HCP's care (eg, clinical symptoms, comorbid conditions), aspects of the care delivered (eg, procedures performed, duration of contact, number of patients under their care during a shift, preoperative screening), HCP practices (eg, PPE utilized, years of experience), and work site (eg, leadership support, control over practice).
In addition, understanding the individual HCP factors that increase the likelihood of an infected HCP developing more severe disease and adverse outcomes will help determine which HCP may need additional protections in place, such as furlough, reassignment from the care of COVID-19 patients, or ongoing testing at a set interval during hospitalization. Research on the costs and effectiveness of policies, such as preoperative and on-admission screening, can inform practice, improve HCP safety, enhance HCP confidence in safety processes, and reduce staffing challenges. 29 Controversy persists about which medical procedures may allow opportunistic airborne transmission of pathogens traditionally considered to follow droplet transmission. Key considerations include the size of particles generated during specific procedures, the ability of the pathogen to survive in small particles, and the infectious dose for the pathogen (the amount of virus carried in small airborne particles sufficient to cause infection if inhaled). These questions have important implications for air handling and PPE selection recommendations to minimize transmission risk in healthcare settings.

Identifying policies that support the social, emotional, and economic needs of HCP is critical to maintaining workforce resilience and decreasing presenteeism, burnout, and turnover during the protracted COVID-19 pandemic. 60 Developing an evidence base to inform these policies requires understanding how the pandemic has affected HCP social, emotional, and physical health, finances, ability to care for families, and decisions to come to work. A mixed-methods implementation science approach can inform local, institutional, and national policies and practices to support HCP resiliency, job security, ability to isolate or quarantine effectively, and social support (eg, hazard and sick pay policies, housing options, and consistent childcare). 106

Return to work

Recommendations for return to work following COVID-19 infection are largely derived from experience with other communicable diseases such as influenza and norovirus. COVID-19 has posed specific challenges for timely return-to-work strategies due to minimal data on the true transmission dynamics and the nature of exposure risks during HCP-patient or HCP-coworker interactions. Little is known about which HCP roles or activities confer the highest risks of transmission and how this risk is modified by the severity of prior illness or the presence of lingering symptoms upon return. Three research domains identified below (Supplementary Table 4 online) address the urgent priorities for research to facilitate timely worker return while tempering the risks of premature return to work during active illness: 115

Priority area 1: Determine the risk of SARS-CoV-2 transmission by returning HCP to coworkers and patients, by HCP type and setting.

Priority area 2: Determine the optimal criteria and modifications necessary for the earliest safe return to work.

Priority area 3: Determine the sociocultural impact of and strategies for successful return to work for HCP.

As of November 2020, the US CDC recommends that HCP who have had COVID-19 return at 10 days from initial symptom onset (20 days if immunocompromised) if improved and fever-free for 24 hours without fever-reducing medications. 116 During a pandemic, sick leave of this duration, even for HCP who are asymptomatic, could significantly impact healthcare system staffing.
Many employers are considering the use of test-based strategies to shorten the window for HCP to return to work. Shared anecdotal experiences suggest a broad range of approaches for return to work across institutions and locations (eg, length of furloughs, strategies to address persistently PCR-positive recovered workers, coworker education, and acculturation to mitigate social stigma). The true risk of virus transmission from asymptomatic or minimally symptomatic HCP to patients or coworkers remains largely unknown. Research is needed to assess the relative importance of worker-related factors, such as viral viability in minimally symptomatic or immunocompromised HCP, the effectiveness of PPE or physical barriers to mitigate transmission, and the risk of acquisition among immunocompromised coworkers and patients. HCP workflow, social culture, and the nature of coworker interactions may also affect the likelihood of transmission within the workplace. Finally, research addressing worker reintegration and actions to assuage social stigmas (eg, educational needs of workers, childcare) is vital to retaining a talented and prepared workforce. 90

Conclusion

The SHEA COVID-19 research agenda is critical and ambitious. COVID-19 has exposed dangerous gaps in our understanding of the epidemiology, transmission, and individual as well as public health consequences of viral diseases. Global impacts on health, the economy, and progress have been felt in every population and country. The disease has disproportionately affected older adults, especially those living in nursing homes or long-term care facilities, racial minorities, and those with multiple comorbidities. Supply shortages have affected the health and well-being of HCP and have negatively affected the care of those not infected with COVID-19. A well-planned, collaborative, comprehensive research agenda, with careful, dedicated, and timely execution, is a critical element in addressing the most important questions so that outbreaks and pandemics can be limited more effectively. With the recognition that pandemics do not respect boundaries or economies, close collaboration between various disciplines is crucial. Research initiatives and trust between industrialized and developing nations are needed to address these critical questions in ways generalizable around the globe, including attention to capacity building, technology transfer, training resources, and aligning surveillance and prevention activities. This research agenda is a snapshot in time during the worst days of the COVID-19 pandemic and will certainly shift as the pandemic evolves and as vaccines and other therapeutics become available. Nonetheless, several priorities outlined here have relevance to future infectious disease outbreaks and epidemics. This research agenda calls for translational studies, from laboratory-based basic science research to well-designed, large-scale studies and health outcomes research. To undertake this work, funding organizations must make COVID-19 research their highest priority. We anticipate that the next decade will be crucial in developing the next generation of epidemiologists, IPs, researchers, and leaders.
2021-01-26T06:16:21.741Z
2021-01-25T00:00:00.000
{ "year": 2021, "sha1": "cc2e456fe4b96f942481ddc4d1735c6e7750fe77", "oa_license": "CCBY", "oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/58E199619306BF52B569DEC7DA62F37D/S0899823X21000258a.pdf/div-class-title-coronavirus-disease-2019-covid-19-research-agenda-for-healthcare-epidemiology-div.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "1dfe83f161a6c2ac1a680fd7fab5797a1c43ae46", "s2fieldsofstudy": [ "Medicine", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
264997501
pes2o/s2orc
v3-fos-license
Experimental certification of contextuality, coherence, and dimension in a programmable universal photonic processor

Quantum superposition of high-dimensional states enables both computational speed-up and security in cryptographic protocols. However, the exponential complexity of tomographic processes makes certification of these properties a challenging task. In this work, we experimentally certify coherence witnesses tailored for quantum systems of increasing dimension using pairwise overlap measurements enabled by a six-mode universal photonic processor fabricated with femtosecond laser writing technology. In particular, we show the effectiveness of the proposed coherence and dimension witnesses for qudits of dimensions up to 5. We also demonstrate advantage in a quantum interrogation task and show it is fueled by quantum contextuality. Our experimental results testify to the efficiency of this approach for the certification of quantum properties in programmable integrated photonic platforms.

INTRODUCTION

Quantum computers are capable of solving problems believed to be effectively impossible classically, such as sampling from complex probability distributions [1], predicting properties of physical systems [2], and factoring large integers [3]. The quantum computational advantage for these tasks is built on rigorous no-go results in computational complexity theory, showing a gap between quantum and classical resources for the same task that can be exponential. Research on quantum foundations also addresses quantum advantage, by asking the question: what type of quantum information processing cannot be explained classically? Answers broadly suggest a different kind of advantage, not in terms of computational power, but in terms of intrinsic classical limits to information processing.

Advantage in quantum information processing pushes success rates in communication tasks [4][5][6], security of key distribution protocols [7], and success rates in discrimination tasks [8] beyond those that can be reached using only classical resources. This kind of advantage results from understanding quantum foundational aspects of nonclassical resources such as entanglement [9], coherence [10], Bell nonlocality [11] and contextuality [12], where classical-quantum gaps in explaining the phenomena can be predicted, bounding success rates in a quantifiable manner. For instance, phenomena that can be reproduced by noncontextual models [13] include interference [14], superdense coding [15] and Gaussian quantum mechanics [16], but it is possible to describe precisely when processing of quantum information allows an advantage over such seemingly powerful models in many tasks [4,5].

In this work, we propose and test families of coherence and contextuality witnesses in proof-of-principle device-dependent experiments carried out with single-photon states processed by a programmable integrated photonic circuit. Such witnesses require the careful preparation of the system in a set of different states and then the estimation of a set of pairwise state overlaps [17][18][19][20]. For this task, we encode qubits and qudits with dimensions up to five in a six-mode universal photonic processor (UPP) realized with the femtosecond laser writing technology [21].
We start by testing a recent quantum information advantage [19] for the task of quantum interrogation, first proposed as the celebrated Elitzur-Vaidman bomb-testing experiment [22]. This task has had a profound impact on quantum foundations [23,24] that later converged into technical developments, such as the possibility of performing counterfactual quantum computation [25,26], the development of the field of quantum imaging with undetected photons [27,28], and high-efficiency interrogation using the quantum Zeno effect [29,30]. We experimentally verify that the efficiency achievable by quantum theory cannot be explained by noncontextual models such as those of Refs. [14,15], as predicted in Ref. [19]. We hence certify both the presence of this nonclassical resource in the device and its ability to use it for information processing advantage.

Although noncontextual models are capable of capturing some aspects of quantum coherence [15], in a way similar to how local models reproduce some aspects of quantum entanglement [31], coherence is still of utmost relevance for quantum information science. It plays a major role in Shor's factoring algorithm [32], and it is crucial for the quantum advantage provided by linear-optical devices. One can then ask the question of what cannot be explained with coherence-free models. The recently established inequalities of Refs. [17,20] provide a precise answer, by rigorously bounding such models.

We show theoretically, numerically, and experimentally that a family of inequalities introduced in Ref. [19] has a particularly interesting property: violations of such inequalities witness not only coherence inside the interferometers, but also the dimensionality of the information encoded. Since Hilbert space dimension is itself a resource, the question of what can be done only with qudits is of relevance for information processing. Some information tasks do require qudits [33][34][35][36], or have their security linked to the dimension [37], so an important research field is devoted to developing methods to guarantee lower bounds on the Hilbert space dimension attained by different physical systems [38][39][40][41][42].

The fact that this family of inequalities has violations only for coherent qudits marks a new paradigm for quantum coherence allowed by the basis-independent perspective [17,43]. There exists quantum coherence that is achieved by qudits that cannot be achieved with qubits. Such a fact has no precedent from the resource-theoretic perspective of basis-dependent coherence [10]. Coherence captured only with qudits was first considered in Ref. [44]. We experimentally witness this proposed new form of coherence for qutrits, ququarts and ququints inside a six-mode programmable integrated universal photonic processor. We prove that the inequality violated by pure qutrits cannot be violated by qubits, complementing this result with numerical and experimental investigations, and we perform a similar analysis for our inequalities violated by ququart and ququint systems. In doing so, we extend the dimension witness result from Ref. [18] both qualitatively and quantitatively, making the best use of the flexibility and accuracy of our multi-mode processor.
A. Theoretical framework

Quantum coherence is commonly described as a basis-dependent property. Given some space H describing a system and a fixed basis Ω = {|ω⟩}_ω, any state ρ is said to be coherent if it is not diagonal with respect to Ω. It is possible to avoid basis-dependence by considering sets of states [43]. Given any set of states ρ = {ρ_i}_{i=0}^{n−1}, the entire set is said to be basis-independent coherent, or simply set-coherent, if there exists no unitary U such that ρ → σ = UρU† = {Uρ_iU†}_{i=0}^{n−1} with every σ_i = Uρ_iU† diagonal.

Witnesses of such a notion of basis-independent coherence were proposed in Ref. [17], building on the realization that set-coherence is a relational property among the states in ρ. Bargmann invariants [45,46] completely characterize all the relational information of any set of states. The simplest such invariants are the two-state overlaps r_{i,j} = Tr(ρ_i ρ_j), for ρ_i, ρ_j ∈ ρ. In the Methods section and Supplementary Note 1 we recall why the overlap inequalities of Refs. [17,20] serve as set-coherence witnesses. The first non-trivial inequality bounding coherence [17] was experimentally investigated in Ref. [18], and bounds the three overlaps of a set of 3 states:

r_{0,1} + r_{0,2} − r_{1,2} ≤ 1.    (1)

Violations of such inequalities represent witnesses of basis-independent coherence of ρ = {ρ_i}_{i=0}^{2}. However, as was shown in Ref. [20], this is also a witness of contextuality when we interpret each state either as an operational preparation procedure or as a measurement effect. In Supplementary Note 2 we review in detail how these inequalities help to characterize contextuality.

As part of our certification we perform a quantum information task known as (standard) quantum interrogation [22], which can be performed using a two-mode Mach-Zehnder interferometer (MZI) set-up, as depicted in Fig. 1a, and interpreted in light of our discussion about the connection between coherence and contextuality. For the purpose of testing our device, we will quantify the success rate of the interrogation task using the efficiency η given by

η = p_succ / (p_succ + p_abs),    (2)

where p_succ is the probability of successfully detecting the presence of the object without it absorbing the photon, and p_abs corresponds to the probability of absorption. In the Methods section we precisely describe the task and how these probabilities relate to the MZI beam-splitting ratio.
Ref. [19] showed that noncontextual models cannot explain η for arbitrary beam-splitting ratios, and that there exists a quantifiable gap between the efficiencies achievable by quantum theory and by noncontextual models. We provide a more robust discussion of this result in Supplementary Notes 2, 3 and 4, where we model the noise resistance of the contextual advantage result and describe related loopholes for testing contextuality of the obtained data. In the remainder of the certification we will focus solely on nonclassicality provided by set-coherence.

Violations of the inequalities of Ref. [19] are a promising, scalable and efficient way to witness coherence inside multi-mode interferometers, as described in Fig. 1b (see also Supplementary Note 1). A multipath interferometer is an efficient device for generating high-dimensional coherent states and measuring their two-state overlaps. Consider any two states |ψ_i⟩ = U_i|0⟩ and |ψ_j⟩ = U_j|0⟩ over some finite-dimensional Hilbert space, in which |0⟩ is one state of a given basis. Their overlap can be measured by choosing the two stages of a generic interferometer such that

r_{i,j} = |⟨ψ_j|ψ_i⟩|^2 = |⟨0|U_j^† U_i|0⟩|^2,    (3)

i.e., by preparing the state with a first stage implementing U_i and measuring the probability of finding the photon back in mode 0 after a second stage implementing U_j^†. Using multi-mode devices it is possible to witness not only coherence, but coherence achievable only with qudits, by violation of the following family of inequalities defined recursively:

h_n(r) = h_{n−1}(r) + r_{0,n−1} − Σ_{i=1}^{n−2} r_{i,n−1} ≤ 1,    (4)

where the sequence starts with h_3(r) = r_{0,1} + r_{0,2} − r_{1,2} and the above equation defines inequalities for any integer n > 3. When n = 4 we have

h_4(r) = r_{0,1} + r_{0,2} + r_{0,3} − r_{1,2} − r_{1,3} − r_{2,3} ≤ 1.    (5)

The inequality in Eq. (5) cannot be violated by a set of pure qubit states. With qutrits, it reaches violations up to 1/3. Hence, quantum violations of inequality (5) represent witnesses of both coherence and Hilbert space dimension higher than 2.
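As a minimal numerical illustration of this dimension-witnessing behavior (a sketch provided for clarity, not the code used in the experiment), the snippet below evaluates h_4 directly from state vectors: sampled pure qubit 4-tuples stay below the bound, while a symmetric qutrit configuration, one convenient choice reaching the quoted maximal violation of 1/3, attains 4/3. The specific qutrit construction is our own illustrative choice.

```python
# Sketch: evaluate h_n from pure-state vectors, using the closed form equivalent
# to the recursion in Eq. (4): h_n = sum_i r_{0,i} - sum_{1<=i<j} r_{i,j}.
import numpy as np

def overlaps(states):
    """Pairwise overlaps r[i, j] = |<psi_i|psi_j>|^2 for normalized vectors."""
    g = np.array(states)
    return np.abs(g.conj() @ g.T) ** 2

def h_n(states):
    """Apex-form evaluation of the h_n functional for the given set of states."""
    r = overlaps(states)
    n = len(states)
    plus = sum(r[0, i] for i in range(1, n))
    minus = sum(r[i, j] for i in range(1, n) for j in range(i + 1, n))
    return plus - minus

def random_state(d, rng):
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    return v / np.linalg.norm(v)

rng = np.random.default_rng(0)

# (i) random pure qubit 4-tuples never violate h_4 <= 1
best_qubit = max(h_n([random_state(2, rng) for _ in range(4)]) for _ in range(20000))
print(f"max h_4 over sampled qubit 4-tuples: {best_qubit:.4f}")   # stays below 1

# (ii) a symmetric qutrit configuration: an apex state plus three 'trine' states
c, s = np.sqrt(5 / 9), np.sqrt(4 / 9)
trine = [np.array([np.cos(2 * np.pi * k / 3), np.sin(2 * np.pi * k / 3)]) for k in range(3)]
states = [np.array([1, 0, 0], dtype=complex)]
states += [np.array([c, s * t[0], s * t[1]], dtype=complex) for t in trine]
print(f"h_4 for the qutrit configuration: {h_n(states):.4f}")      # 1.3333 = 4/3
```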
We have numerically found the same behaviour for sets of pure quantum states for the family of h_n inequalities (4) up to h_10, by maximizing over parameters describing up to 10 states. This is evidence that the family of inequalities (4) corresponds to simultaneous witnesses of coherence and dimension, achievable only by the device's capability of precisely preparing and measuring high-dimensional coherent states. We do not prove that these inequalities have this property for all values of n. Using semidefinite programming (SDP) techniques, we show that we can map the maximum possible values of the inequalities (4) to the solutions of a quadratic SDP. We then show that, for n up to n = 2^12, sets of states with dimension d = n − 1 are capable of violating the inequalities h_n, while no violation is obtained from states spanned by a basis of any lower dimension. In Supplementary Note 1 we present these theoretical results and discuss the underlying assumptions for the dimensionality certification in detail.

B. Experimental Implementation and Results

Quantum interrogation and coherence witnesses are tested in heralded single-photon experiments, by means of the experimental setup shown in Fig. 2, composed of a single-photon source based on parametric down-conversion and a programmable integrated universal photonic processor (UPP) fabricated via femtosecond laser micro-machining (see Methods for more details).

Fig. 2. Experimental setup and universal photonic processor. a) Experimental setup. A pair of photons is generated by a SPDC source. One photon is detected as trigger. The second photon is sent into a programmable integrated UPP that can realize a generic unitary transformation. At the output of the chip we measure the two-fold coincidences between the photon in the chip and the trigger photon, detected by APDs. b) UPP internal structure. The optical circuit is a six-mode rectangular mesh of variable beam splitters and phase shifters, enabling the implementation of arbitrary 6x6 unitary transformations. Each variable beam splitter is actually a MZI structure with two 50:50 beam splitters and a phase shifter in between (see the inset). c) Programmable integrated UPP. The integrated device employed in the experiment is a UPP, realized by the femtosecond laser writing technique in an alumino-borosilicate glass substrate. Two fiber arrays are directly plugged in at the input and at the output of the interferometer. Thermo-optic phase shifters are patterned with the same technology on a thin gold layer deposited on the substrate. Electrical currents are supplied to the phase shifters through interposing printed circuit boards (not shown in the figure for the sake of simplicity), allowing one to locally heat the waveguides and change the settings of the optical device. Legend: BBO: beta-barium borate crystal; APD: avalanche photodiode detector; TDC: time-to-digital converter.

Let us now discuss the results obtained in the performed experiments.

Coherence and contextuality in two-level systems

In the previous section we provided the theoretical framework which derives families of coherence witnesses based on the evaluation of pairwise overlaps among states in a finite set. Some inequalities tailored to work as coherence witnesses can also witness quantum contextuality. We have already mentioned inequality (1) as one such example. Furthermore, this inequality predicts an advantage for the efficiency in the task of quantum interrogation [19]. We implement and test this task in a programmable MZI inside the integrated UPP. The experimental optical circuit is shown in Fig. 3a. In our experiment, the absorbing object is modelled by a completely transparent beam splitter with reflectivity r_B = sin θ_B = 0 placed in one arm of the MZI. The MZI is calibrated so that the two beam splitters have the same beam-splitting ratio (θ_1 = θ_2 = θ), with a null internal phase (ϕ = 0). These conditions guarantee that a single photon injected in mode 0 will always come out of output 0 when the object is absent. The aim of this experiment is to estimate the efficiency η of detecting the presence of the object without it absorbing the photon, as defined in Eq. (2). In our scheme p_succ corresponds to the fraction of single photons detected in mode 1, since this output is only possible when the object is present. The probability of absorption p_abs is given by the fraction of photons detected in mode 2.
In Fig. 3b we report the measurements of η for different values of the reflectivity r = sin θ of the two beam splitters. The theoretical curve is given by a quantum model of the MZI, whose performance is, in general, not achievable by any generalized noncontextual model (see also Supplementary Note 2), and reads

η = sin^2 θ / (1 + sin^2 θ) = r^2 / (1 + r^2).    (6)

Our experimental data follows very well the predictions of the quantum model, showing not only that the device generates data that cannot be explained with noncontextual models, but also that it uses contextuality as a resource to achieve quantum-over-classical performance, quantified by the efficiency η. We observe that the largest deviations from the theoretical curve appear for values of r close to 1. This discrepancy can be justified by taking into account the experimental imperfections in the apparatus (see Methods).

In Supplementary Notes 2, 3 and 4 we present a detailed discussion about contextuality in MZIs, including an analysis of the requirements for witnessing contextuality when the device is used for the quantum interrogation task. There, we pick a specific beam-splitter configuration for the interrogation task and show that we experimentally achieve η_exp = 0.428 ± 0.006, while noncontextual models must have an efficiency η_NC ≤ 0.410, even when benefiting from the effect of noise, which raises the noncontextual upper bound for η. In Supplementary Note 5 we also certify coherence in the MZI using a different, novel inequality featuring a high level of violation by five symmetrical states on a great circle of the Bloch sphere.

Coherence and dimension witnesses in higher dimensions

Eq. (4) describes a family of inequalities that are tailored for certifying coherence in systems with dimension d > 2. We will refer to such inequalities as h_n, where n is the number of states in the set whose overlaps we need to evaluate. They arise as inequalities obtained using the event-graph approach [20] when we consider complete graphs K_n. The presence of coherence in the states is witnessed when an inequality is violated, that is, when h_n > 1. An important point to note is that the values h_n(r) provide information regarding the coherence accessible only due to the dimension of the space. In fact, the h_n inequalities are not violated by sets of states without coherence, nor by systems with dimension d < n − 1. For example, one-qubit states do not violate h_4 ≤ 1, qutrit states do not violate h_5 ≤ 1, and so on. The main result is that h_n displays different maximum values according to the dimension of the system. This implies that the functionals h_n(r) are not only dimension witnesses but also indicators of the dimension of the space.

We first tested the effectiveness of the h_4, h_5, and h_6 inequalities as coherence witnesses. In Fig. 4 we show the circuits to prepare and measure 3-, 4- and 5-mode qudits. In particular, the red parts of the circuits for the qutrit (Fig. 4a) and the ququart (Fig. 4b) are universal state preparators when a single photon enters the device in inputs 0 and 1, respectively. In the case of 5-mode qudits, the 6-mode UPP does not have enough layers of MZIs to implement independent universal preparation and measurement stations.
However, the circuit in Fig. 4c can prepare a set of six 5-mode qudit states that maximize h_6. In the figure, we also report the h_n values together with the matrix of the pairwise overlaps. We estimated the violations by considering only the upper triangular part of such a matrix, i.e., r_{i,j} with i > j. Further details on the sets of states used to violate the inequalities, and a discussion of the sources of noise in the experimental measurements, are reported in Supplementary Notes 6 and 7.

We then move to the experimental test of the h_n as dimension witnesses by sampling uniformly random states spanned by bases of different dimensions. The distributions of the values obtained for the l.h.s. of the h_n inequalities for d = 2, d = 3 and d = 4 are reported in Fig. 5, together with the maximum theoretical values of h_n for systems of those dimensions. The experimental data confirm the theoretical predictions. This provides further insight into the power of the h_n as dimension witnesses. In fact, the uniform sampling of the states tries to answer the following question: how much information about the dimension of the space can we retrieve from the value of the l.h.s. of the h_n inequalities, without knowing the optimal set that maximizes the violation? We did not sample random states in d = 5 since our circuits are not universal state preparators and measurement devices for qudits of this dimension. Table I summarizes the maximum values of the functionals h_n obtained in the random sampling and in the previous analysis dedicated to coherence witnesses. We see that the maximum violations are not typically achievable when sampling random states. However, the h_n become very effective in discerning systems of d > 2 for increasing values of n.

In summary, we showed how to exploit new families of overlap inequalities to witness coherence in qudit systems. The coherence is certified when h_n > 1. Furthermore, the h_n inequalities introduced in Ref. [20] and tested here are only violated by systems having both coherence and a dimension d ≥ n − 1. Even when h_n < 1, while not witnessing coherence, the value of h_n still provides information on the dimension.

DISCUSSION

In this work, we have characterized how quantum information is processed within a six-mode programmable integrated UPP. To do so, we witnessed, and used, two different notions of quantum nonclassicality, namely generalized contextuality and coherence. Our characterization of nonclassicality is done in a way that depends on the dimension of the Hilbert space generated by single-photon interference through the paths of the programmable device. Our analysis begins with the simplest scenario, where we use a subsection of the device to implement a two-mode MZI. We demonstrate the presence of generalized contextuality within the MZI by violation of a recently introduced generalized noncontextuality inequality. We show that this resource is used to achieve efficiencies in the task of quantum interrogation that are higher than those possible by any noncontextual model that reproduces the same operational constraints considered in our experiment.
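For reference, a minimal model of the ideal interrogation experiment discussed above can be sketched as follows. This is not the analysis code used for Fig. 3b, and the conventions (which arm hosts the object, lossless optics, no dark counts) are assumptions made for illustration; the resulting efficiencies are consistent with the values quoted in the text.

```python
# Sketch of the ideal lossless interrogation model: object in the arm fed with
# amplitude cos(theta), both beam splitters sharing the same ratio r = sin(theta).
import numpy as np

def interrogation(theta):
    """Return (p_succ, p_abs, eta) for beam-splitter reflectivity r = sin(theta)."""
    p_abs = np.cos(theta) ** 2                         # photon enters the object arm
    p_succ = np.sin(theta) ** 2 * np.cos(theta) ** 2   # photon avoids the object and
                                                       # exits the port that is dark without an object
    eta = p_succ / (p_succ + p_abs)
    return p_succ, p_abs, eta

for r in (np.sqrt(0.5), np.sqrt(0.75), 0.99):
    p_s, p_a, eta = interrogation(np.arcsin(r))
    print(f"r = {r:.3f}: p_succ = {p_s:.3f}, p_abs = {p_a:.3f}, eta = {eta:.3f}")
# 50:50 splitting gives eta = 1/3; r^2 = 3/4 gives eta = 3/7 ~ 0.429; eta -> 1/2 as r -> 1.
```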
We then proceeded to investigate nonclassicality generated by a single photon able to propagate through gradually larger portions of the interferometer. In doing so, we introduce a novel theoretical perspective in coherence theory: quantum coherence achievable only by qudits. We show that a family of inequalities is capable of identifying coherence that can only be witnessed when the totality of the Hilbert space dimension considered is used in non-trivial ways. We experimentally measure the presence of this kind of coherence for a single photon interfering in up to 5 modes. Via numerical simulations, we demonstrate that this family of inequalities is a simultaneous witness of coherence and Hilbert space dimension d up to d = 2^12.

Our certification scheme leaves some opportunities for further theoretical investigation. For instance, while our scheme is not device-independent, we believe that in the future it may be suitable for a description in the semi-device-independent framework, since our single requirement on the data is that it corresponds to two-state overlaps, possibly interpreted as a promise over the possible measurements. Quantum states, Hilbert spaces and physical devices are otherwise arbitrary. Also, due to the invariant properties of the inequalities, their maximization will likely be related to the task of self-testing [47], and techniques used there might be applicable in our case.

We believe that violation of these inequalities can be exploited in the future as a novel certification technique benchmarking nontrivial high-dimensional coherence, one that may be related to the hardness of quantum computation. Moreover, the theoretical results presented here apply to any platform for quantum computation, and not just photonics.

Experimental setup

A BBO crystal is pumped by a pulsed laser at the wavelength of λ = 392.5 nm. The spontaneous parametric down-conversion (SPDC) process generates photon pairs at λ = 785 nm. In the experiment we focus on single-pair emission events, which are much more likely to happen than multi-pair generation. One of the two photons is employed as a trigger signal, while the other one is injected into a universal photonic processor (UPP), i.e. a fully programmable multi-mode interferometer. This device consists of a waveguide circuit, fabricated in-house by the femtosecond laser writing technology in an alumino-borosilicate glass chip. The scheme of the photonic circuit of the six-mode UPP is reported in Fig. 2c and follows the decomposition into a rectangular network of beam splitters and phase shifters devised in [48]. The beam splitters in the scheme are actually MZIs (see inset in Fig. 2c), which provide variable splitting ratios depending on the value of the internal phase. Dynamic reconfiguration of the UPP operation is accomplished by thermo-optic phase shifters, which enable the active control of the values of the phase terms placed inside and outside the cascaded MZIs. The input and output ports of the waveguide circuit are optically connected via fiber arrays (single-mode fibers at the input, multi-mode fibers at the output; see also Fig. 2b).
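To make the role of the internal phase concrete, the sketch below builds the 2x2 transfer matrix of a generic MZI cell with two balanced couplers; the exact phase conventions of the fabricated device may differ, so this is a textbook model rather than the device's calibration.

```python
# Sketch (assumed conventions) of the variable beam splitter used as the building
# block of the mesh: coupler - internal phase(theta) - coupler - external phase(phi).
import numpy as np

BS = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)   # balanced directional coupler

def variable_bs(theta, phi=0.0):
    """2x2 transfer matrix of the MZI cell; theta sets the splitting ratio."""
    internal = np.diag([np.exp(1j * theta), 1.0])
    external = np.diag([np.exp(1j * phi), 1.0])
    return external @ BS @ internal @ BS

for theta in (0.0, np.pi / 2, np.pi):
    T = variable_bs(theta)
    assert np.allclose(T.conj().T @ T, np.eye(2))          # unitarity check
    print(f"theta = {theta:.2f}: cross probability = {abs(T[1, 0])**2:.3f}")
# Under this convention the cross probability is cos^2(theta/2): the cell sweeps
# from full cross coupling (theta = 0) to full bar transmission (theta = pi),
# i.e., the entire range of splitting ratios.
```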
In particular, the phase shifters are based on gold microheaters, deposited and patterned on the chip surface. Upon driving suitable currents into the microheaters, local heating of the substrate is achieved in a precise and controlled way. Such local heating in turn induces a refractive index change, and thus controlled phase delays in the waveguides, due to the thermo-optic effect. A careful calibration of the phase shifters allows us to implement with our UPP any linear unitary transformation of the input optical modes. Such calibration is performed with classical coherent light and does not rely on the quantum theory we test in our experiments. More details on the design, fabrication and calibration process of the UPP are provided in Supplementary Note 8. Finally, the output fibers and the trigger-photon fiber are connected to avalanche-photodiode single-photon detectors (APDs). The detector signals are processed by a time-to-digital converter (TDC) that counts the two-fold coincidences between the chip outputs and the trigger photon.

Noise model for the quantum interrogation experiment

The main sources of noise that need to be considered for the quantum interrogation experiment are mismatches in the reflectivity r of the two beam splitters, as well as dark counts of the detectors. These become more significant for r ∼ 1, since in this regime both p_succ and p_abs tend to be very small. Our noise-corrected formula for η depends on ε, the percentage mismatch of the two reflectivities, and on n_1 and n_2, the ratios between dark counts and signal at the relevant outputs. The red area in Fig. 3b encloses the set of curves resulting from a span of the parameters ε, n_1 and n_2 in the range between 0 and 0.005. Our noise model predicts large deviations from the ideal quantum efficiency when the beam-splitting ratio approaches r = 1.

Pairwise overlap inequalities characterizing incoherent sets of states

We briefly recall the arguments from Refs. [17,19] for why the selected inequalities are capable of bounding the overlaps of sets of incoherent states, and how a graph-theoretic construction enables finding such inequalities. For a set of states that is diagonal with respect to some basis, the two-state overlaps Tr(ρ_i ρ_j) of the elements {ρ_i}_i of that set represent the probability of obtaining equal outcomes from the states upon measurements with respect to the reference basis. When such an interpretation is possible, we say that the set of states is coherence-free, or incoherent, and the reference basis is a coherence-free model for this set. For each set of states ρ it is possible to define an edge-weighted graph (G, r), with the vertices V(G) of the graph representing quantum states and the edges E(G) carrying weights r_e ≡ r_{i,j} := Tr(ρ_i ρ_j). If we collect all weights into a tuple r = (r_e)_e ∈ R^{|E(G)|}, with |E(G)| the total number of edges in the (finite) graph G, it is possible to bound all tuples r resulting from states that are diagonal with respect to some basis. This was studied in Refs. [17,20], and the bounds are given by linear inequalities. Similarly to what is already well established in Bell nonlocality [11] and contextuality [12], the inequalities bound descriptions of classicality; here, they are built to bound coherence-free models for ρ.
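The general construction is spelled out in the following paragraphs; as a minimal illustration for the triangle K_3, the sketch below enumerates deterministic outcome-label assignments, interprets each edge weight as the probability of equal outcomes, and confirms that every classical vertex obeys inequality (1).

```python
# Sketch of the classical (coherence-free) vertices for the triangle K_3: each node
# gets a deterministic outcome label, each edge weight is 1 if the two labels
# coincide and 0 otherwise, and all such vertices satisfy r_01 + r_02 - r_12 <= 1.
from itertools import product

labels = range(3)                       # three labels suffice for three nodes
vertices = set()
for w0, w1, w2 in product(labels, repeat=3):
    r01, r02, r12 = int(w0 == w1), int(w0 == w2), int(w1 == w2)
    vertices.add((r01, r02, r12))

print(sorted(vertices))
# [(0,0,0), (0,0,1), (0,1,0), (1,0,0), (1,1,1)] -- e.g. (1,1,0) is impossible classically
assert all(r01 + r02 - r12 <= 1 for (r01, r02, r12) in vertices)
print("all classical vertices satisfy r_01 + r_02 - r_12 <= 1")
```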
The basic reasoning, described in its most general form, starts by acknowledging that, if all states are incoherent, there exists some set of outcomes Ω with respect to which the weights r_{i,j} represent the probability that, upon independently measuring ρ_i and ρ_j from adjacent nodes i, j in the graph G, we obtain equal outcomes. As an example, consider two adjacent nodes both described by the maximally mixed qubit state I_2/2. In this case the edge weight corresponds to the two-state overlap Tr(I_2/4) = 1/2. This is also the probability that we measure these two states with respect to the basis that diagonalizes them and obtain equal outcomes, i.e., the probability that two ideal coins return equal outcomes.

To find overlap inequalities the algorithm then goes as follows: for a given graph G, consider all conceivable tuples r = (r_e)_e described by deterministic assignments of 0 or 1 to the edges. Any tuple of two-state overlaps lies inside the polytope described by the hypercube [0, 1]^{|E(G)|}. Using the assumption of incoherent states and what this forces the overlaps to satisfy, i.e., to be probabilities of equal outcomes with respect to some set of labels, one can forbid some assignments from the hypercube. The remaining ones are just those possible from an incoherent interpretation of states and edge weights. In convex geometry, the convex hull of this set of assignments defines a polytope, and using standard tools it is possible to find the facet-defining inequalities for this polytope, given that the vertices are known. These facet-defining inequalities are the inequalities we probe in our work. By construction, inequality violations immediately contradict the hypothesis of incoherent states, hence serving as witnesses of set-coherence.

In this work, the graphs considered are only complete graphs K_n, graphs in which every node is connected to every other node, for n = 3, 4, 5, 6. The label n describes the number of nodes of the graph. The inequalities described by Eq. (4) are one among many families of inequalities that can be derived for the graph K_n, and they have the particularly interesting properties we discuss in the main text. In the Supplemental Material we describe a different inequality from the same graph that exhibits a large violation achievable already with qubits. For a more detailed description we refer to Ref. [20].

Standard quantum interrogation task

For the interrogation task one assumes that there might be some object in one of the interferometer's arms, depicted as a question mark in Fig. 1a.
The object, if present, is assumed to completely absorb incoming photons, and the task is to detect the presence of the object without any photon being absorbed by it. It is somewhat surprising that such a task can be accomplished at all, but using the fact that beam splitters create coherence, a simple set-up is capable of performing the task. With two 50:50 beam splitters and no phase difference between the arms, if there is no object all the coherence created in the preparation stage is destroyed in the measurement stage, and one of the detectors never lights up. However, if an object is present, it acts as a complete path-information measurement device inside the interferometer, projecting the state of the photon inside the interferometer onto the arm where the object is not present. This happens with 50% probability. When the non-absorbed photon hits the second beam splitter in the measurement stage there is a 50% chance that the detector that would be dark in the absence of an object now lights up. Therefore, with probability 25% one can detect the presence of the object without directly interacting with it. Surprisingly, there are noncontextual models capable of reproducing precisely this feature [14].

We can vary the beam-splitting ratios of the protocol, which allows us to improve the efficiency of the task. It is crucial that the second beam splitter in the MZI perfectly reverses the first beam splitter's action, so that whenever we have no object inside the interferometer, or we have an inactive object, one of the detectors never clicks (remaining dark). The probabilities p_succ and p_abs depend on the beam-splitting ratio characterizing the beam splitter. The probability p_abs is simply the probability that, once the photon enters the device, it goes to the arm that has the object. The probability p_succ is the probability that the photon does not reach the object, hence going to the other arm, and leaves the MZI via the output port that is dark in the absence of the object.

Theorem 1. Let {ρ_i}_{i=1}^{4} be any 4-tuple of pure quantum states of dimension 2. Then, letting r_{i,j} = Tr(ρ_i ρ_j), it holds that h_4(r) ≤ 1.

Proof. We want to maximize h_4 for qubit states. The general form of this maximization procedure over all possible states is

max_{ρ_1,ρ_2,ρ_3,ρ_4 ∈ D(H)} [Tr(ρ_1ρ_2) + Tr(ρ_1ρ_3) + Tr(ρ_1ρ_4) − Tr(ρ_2ρ_3) − Tr(ρ_2ρ_4) − Tr(ρ_3ρ_4)],

where D(H) is the set of all density matrices over H ≃ C^2. In fact, if we consider only the maximization with respect to ρ_1 first, we see that ρ_1 appears only in the first 3 overlaps. This allows us to use the following relation,

Tr(ρ_1ρ_2) + Tr(ρ_1ρ_3) + Tr(ρ_1ρ_4) = Tr(ρ_1(ρ_2 + ρ_3 + ρ_4)) ≤ ∥ρ_2 + ρ_3 + ρ_4∥,

where in the first equality we have used linearity of the trace, and for the second inequality we define the operator norm ∥A∥ := sup_{∥v∥_H = 1} ∥Av∥_H, where ∥ • ∥_H is the norm that makes H a Hilbert space, i.e., the norm arising from the inner product. In particular, because the sum of positive semidefinite matrices is again positive semidefinite, Σ_i ρ_i is positive semidefinite, and the inequality is tight, meaning that for any given ρ_2, ρ_3, ρ_4 there is a state ρ_1 such that equality holds. Therefore we end up translating our problem into

max_{ρ_2,ρ_3,ρ_4 ∈ D(H)} [∥ρ_2 + ρ_3 + ρ_4∥ − Tr(ρ_2ρ_3) − Tr(ρ_2ρ_4) − Tr(ρ_3ρ_4)],

and we proceed to study the quantity ∥ρ_2 + ρ_3 + ρ_4∥. Due to the invariant nature of the overlap scenarios we may, with no loss of generality, choose ρ_2, ρ_3, ρ_4 to be the density matrices associated with the pure states |0⟩, |θ⟩ and |α, φ⟩ defined by

|θ⟩ = cos θ |0⟩ + sin θ |1⟩,   |α, φ⟩ = cos α |0⟩ + e^{iφ} sin α |1⟩.

This implies that we will have a relation dependent only on 3 parameters to investigate, θ, α ∈ [0, π/2] and φ ∈ [0, 2π].

Recall the following theorem.
Let H be a Hilbert space. Then, for any operator A ∈ B(H), where B(H) is the set of bounded operators with domain and codomain in H, we have that ∥A∥ = sup{|λ| : λ ∈ σ(A)} whenever A is normal (in particular, whenever A is positive semidefinite), where ∥ • ∥ is the operator norm and σ(X) is the spectrum of the operator X. Carrying out the maximization over the three remaining parameters with the help of this theorem, we conclude that pure qubit states cannot violate the h_4 inequality. The above theorem shows that the h_4 inequality is a witness of both coherence and dimension. We now proceed to numerically investigate whether the same property holds for the family of inequalities h_n described in the main text. This numerical investigation was performed by maximising the value h_n(r) achieved for r = (|⟨ψ_i|ψ_j⟩|²)_{i,j}, i.e., over generic pure-state overlaps, using maximisation functions built into the Mathematica language, specifically NMaximize. We also tested whether (n − 2)-dimensional states could violate the inequalities h_n(r) ≤ 1 by randomly generating sets of quantum states from the uniform (Haar) measure. For h_6(r) ≤ 1 we tested 10^10 sets of 6 samples of ququarts and never violated the inequality. Both approaches, namely sampling sets of states and using numerical tools to find local maxima of the functions h_n(r), indicate that this property holds, i.e., no set of n quantum states over an (n − 2)-dimensional Hilbert space could violate the h_n(r) ≤ 1 inequalities. A. Probing the property of dimension witnessing with quadratic semidefinite programming Let us start by showing that maximizing the inequalities over pure states is sufficient, i.e., that our tests are robust in the sense that the assumption of state purity is not needed for the dimension witnesses we obtained. Let us first introduce some notation. We say that a given overlap tuple r = (r_e)_e, with respect to some graph G, has a quantum realization if there exist some Hilbert space H and some {ρ_i}_i such that r_e ≡ r_{ij} = Tr(ρ_i ρ_j). We denote a specific realization of r as r({ρ_i}_i). Note that overlap tuples may have many realizations, such as (1, 1, 1) in G = C_3, or none, such as (1, 0, 1) in the same graph. For each realization r({ρ_i}_i), we denote by h_n(r({ρ_i}_i)) the value that the functional h_n takes with respect to the overlap values r with realization r({ρ_i}_i). Once more, a given value of h_n(r) can be achieved with various quantum realizations r({ρ_i}_i). We can collectively write s = (ω_1, . . ., ω_m) and define q_s = λ^{(1)}_{ω_1} · · · λ^{(m)}_{ω_m}. Because each set of weights {λ^{(i)}_{ω_i}}_{ω_i ∈ Ω_i} corresponds to convex weights, i.e., Σ_{ω_i} λ^{(i)}_{ω_i} = 1 with 0 ≤ λ^{(i)}_{ω_i} ≤ 1, we get that {q_s}_s is also a set of convex weights. With this simplified notation we have that Eq.
( 11) becomes with s q s = 1 and 0 ≤ q s ≤ 1.In words, the linear-functional h n realized by overlaps between general quantum states can be written as the convex combination of the same functional realized by overlaps between pure states.Choosing now a particular ensemble s ⋆ such that ∀s, h n (r({ψ Since s q s = 1 we have that h n (r({ρ i } i )) ≤ h n (r({ψ Our goal now is to push the numerical results of Table S1 for instances of K n for larger n using techniques of semidefinite programming in order to see if there is the possibility of probing the dimension witness nature of the family of inequalities for larger complete graphs.The major drawback with respect to the maximization over parameters, as we will see, is that we cannot read the states that realize the given violation from the final result.In order to show that h n are dimension witnesses for higher values of n, we show how to re-write the expression defining this family of inequalities into an SDP.We start with two simple lemmas, the first taken from Ref. [1].Lemma 4. Let h + n (r) be the sum of all overlaps for a graph K n .Assume also that the quantum realization of the graph is given by the set of states {|ψ i ⟩} n i=1 .Then, we have that where Noting that X 2 is given by, where we have used the fact that r i,j = r j,i .Now, note that in fact h + n (r) = i<j r i,j .From this we have our relation by inverting the last equation, as we wanted to show. We may also note that, due to the recurrence description of h n (r) it is possible to use only h + n (r) to describe the inequality.Lemma 5.The inequality functional h n (r) is equal to h n (r) = h + n (r) − 2h + n−1 (r).Proof.As we have that We can see that, as we wanted to show. For the sake of clarity, let's consider an example of the above description for the lemma.Recalling that h + 3 (r) = r 1,2 + r 1,3 + r 2,3 then, This is equivalent to h 4 (r) up to the relabel 4 → 1 and 1 → 4. We can use the lemmas above to write h n (r) only in terms of X, which eventually lead a semidefinite programming description of the problem of maximizing h n (r) for finding quantum violations.Theorem 6.The quantum realizations of the inequality functional h n (r) can be expressed as a quadratic semidefinite program (SDP) optimized over quantum states X.The resulting optimized values provide upper bounds for the values of h n (r) that can be reached with quantum theory for any d.. Proof.From lemma 5 and lemma 4 together we can write h n (r) as, where we have that X In other words, the second term of h n lacks |ψ n ⟩⟨ψ n |.It is clear that we may write X in terms of X ⋆ provided that we fix |ψ n ⟩.As h n (r) is a projective-unitarily invariant functional of a set of states we can unitarily transform any set of states such that |ψ n ⟩ = |0⟩ ∈ H ≃ C d is the reference d-dimensional canonical basis vector of C d .In this case, we have that which implies that, This last expression allows us to write h n (r) in terms of X ⋆ and |0⟩⟨0| only. We can then write the following quadratic SDP problem, for any 2 ≤ d ≤ n − 1 as X ⋆ lives in the span of {|ψ i ⟩⟨ψ i |} n i=1 .Above we simply set, A n := −(n − 1) 2 /2, B n := (n − 1) and C n = (n − 1)/2.We also use the common description of the trace inner product Tr(A † B) = ⟨A, B⟩. 
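As an illustration of Theorem 6, the following is a minimal sketch of how the resulting quadratic SDP might be set up in CVXPY. It assumes the objective takes the form A_n Tr(X*²) + B_n ⟨0|X*|0⟩ + C_n with the coefficients quoted above, restricts X* to real symmetric matrices for simplicity, and uses the SCS solver mentioned below; the authors' actual implementation (Ref. [4]) may differ in these details.

```python
import cvxpy as cp

def hn_upper_bound(n, d):
    """Upper bound on h_n over d-dimensional realizations (sketch of Theorem 6)."""
    A_n, B_n, C_n = -(n - 1) ** 2 / 2, float(n - 1), (n - 1) / 2
    # X plays the role of X* = (1/(n-1)) sum_i |psi_i><psi_i|, relaxed to any density matrix
    X = cp.Variable((d, d), symmetric=True)
    objective = cp.Maximize(A_n * cp.sum_squares(X) + B_n * X[0, 0] + C_n)
    problem = cp.Problem(objective, [X >> 0, cp.trace(X) == 1])
    problem.solve(solver=cp.SCS)
    return problem.value

print(hn_upper_bound(4, 2))   # ~1.00: consistent with qubits not violating h_4
print(hn_upper_bound(4, 3))   # ~1.33: a bound above 1, so qutrit violations are not excluded
```

Since A_n < 0 the objective is concave, so the maximization is a standard convex problem, which is one reason interior-point solvers handle it efficiently.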
This theorem immediately shows that we may find upper bounds, for any dimension d, on the maximum values of h n (r) from a set of states {|ψ i ⟩} n i=1 , as the set of all matrices of the form X ⋆ = 1 n−1 n−1 i=1 |ψ i ⟩⟨ψ i | is only a subset of all possible density matrices.The theorem also shows that it is unnecessary to search for dimensions d > n − 1 as the problem is defined for matrices X ⋆ that are inside the span of {|ψ i ⟩} n−1 i=1 , which is at most (n − 1) dimensional.The interesting aspect of transposing the problem into an SDP is that the problem becomes computationally efficient, polynomial in the memory and computational resources [2,3].In figure S2 we implement the above problem and find solutions for n up to 800.Such a gain allows us to study upper bounds of quantum violations for all integers 4 ≤ n ≤ 800.The main idea for these simulations is to provide strong numerical evidence that the family K n is a dimension witness for all possible values of n ≥ 4. As this might be interesting for when the Hilbert space dimension grows exponentially, we have also considered the case of dimension d = 2 q where q represents number of qubits.We show numerically that 2 q − 2 dimensional states cannot violate inequalities h 2 q (r) ≤ 1, up to q = 12.The results are shown in Fig. S3.As our quadratic SDP is well behaved, we have used interior point methods, using solvers available in CVXPY. We used the Splitting Cone Solver (SCS) and Python Software for Convex Optimization (CVXOPT), and both converged to the same values up to numerically instabilities.Results presented are those for SCS only.We have used these solvers since they are open-source.While not devoted to quadratic SDP optimization, they have converged fast for all points considered.As these are convex optimization tools applied to a quadratic optimization problem, we are not guaranteed that the results are tight; still we managed to find (local) optimal values that agreed with other methods (NMaximize from Mathematica) and our experimental implementations, providing further evidence that our dimension witnesses hold significantly beyond the regimes we have tested experimentally.The SDP code may be found in Ref. [4]. . Numerically testing dimension and coherence witness using semidefinite programming for qubits.We consider the same simulations from Fig. S2 but using qubits, and therefore going to exponentially large Hilbert space dimensions.We see that hn remains a dimension witness for d up to 2 12 .In (a) we see that there is a saturation of 1.5, and in (b) we highlight the changes with a higher precision. B. Assumptions for the task of dimension witnessing with overlap inequalities Certification tasks can depend on the underlying assumptions made on the data generated by quantum devices, and on some promises on the experimental implementation or on the device itself.For a review on different certification protocols, and the most well-known assumptions we refer the reader to Ref. [5]. The restriction that noncontextuality [27] imposes over such models is that operationally equivalent procedures P 1 ≃ P 2 must be represented equally by the model: µ(λ|P 1 ) = µ(λ|P 2 ), ∀λ ∈ Λ. 
Similarly for the measurement effects, Intuitively, the model explains that one cannot distinguish operationally equivalent procedures P 1 ≃ P 2 simply because they correspond to the exact same ontological counterparts µ( In order to test generalized contextuality, we first consider the approach that characterizes prepare-and-measure scenarios described by fragments of finite sets of operational elements in a theory.We shall refer to it as the algebraic approach [29], or equivalently the inequalities approach.For a given finite set of operational elements, it is possible (but generally hard) to find the complete set of generalized noncontextuality inequalities bounding noncontextual explanations for such scenarios [30].Using semidefinite program (SDP) techniques it is also possible to bound the set of quantum correlations [31,32]. As was shown in Refs.[6,33], the inequalities associated with graph K 3 , constituting the class r 1,2 + r 1,3 − r 2,3 ≤ 1, can be mapped into robust generalized noncontextuality inequalities of specific prepare-and-measure scenarios.Formally, the prepare and measure scenario, in its robust form, is described as one in which there are 6 preparation procedures satisfying and three other measurement procedures, for which we assume no operational equivalences, but that satisfy Above we use the notation p(0|M i , P j ) ≡ p(M i |P j ).For our experiment, p(M i |P j ) = r i,j .For such a scenario it is possible to map the K 3 inequality into a robust noncontextuality inequality of the form The quantities ε i theoretically capture the fact that, in a real experiment, the measurements M i do not perfectly discriminate between P i and P i ⊥ .Such imperfections might occur operationally in case M i , P i or P i ⊥ do not correspond to the ideal intended procedures.This last inequality allows us to analyse the claim of contextual advantage for quantum interrogation, from the algebraic perspective.We will do this in the following, showing that our scenario satisfies the operational constraints considered and violates the noncontextuality inequality above. Supplementary Note 3. Robust account of contextuality in quantum interrogation Let us start by analysing the robustness of the effects of contextuality in the interrogation task made in Ref. [6] due to depolarizing noise.Quantum interrogation assumes the possibility of preparing six different quantum states and measuring over all of these states as well, where the states relevant for the quantum interrogation correspond to those generated by the beam splitter, States |θ⟩ result from unitary operations implemented by the first beam splitter in Fig. 3(a) in the main text, when single photons are input in mode 0. The states | − θ⟩ correspond to the rotation that perfectly destroys the interference generated in the first beam splitter, described as unitary rotations generated by the second beam splitter in Fig. 3(a), while the third beam splitter in this figure represents the bomb in the interrogation scheme.The programmability of the device allows for such a freedom in the preparation of those states.Antipodal states can be equally prepared by inputting a photon in the second arm of the MZI.The programmability of the device allows that each of those states can be prepared and all the measurements M 0 := {|0⟩⟨0|, |1⟩⟨1|}, M θ := {|θ⟩⟨θ|, |θ ⊥ ⟩⟨θ ⊥ |, } can also be performed by programming the beam splitters accordingly and gathering statistics from counts in the APDs. 
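To make the dependence on the beam-splitting ratio explicit, the short sketch below computes the ideal (noiseless) interrogation statistics under the convention, used later in this Supplement, that the state inside the interferometer is |θ⟩ = cos θ|0⟩ + sin θ|1⟩, and under our own illustrative choice of placing the object in arm |1⟩. With these assumptions a balanced splitter (θ = π/4) reproduces the 25% detection and 50% absorption probabilities quoted earlier, and θ = 5π/6 gives an efficiency of roughly 0.428, matching the value reported for the working point of the experiment.

```python
import numpy as np

def interrogation_stats(theta):
    """Ideal statistics of the interrogation task at beam-splitter angle theta."""
    p_abs = np.sin(theta) ** 2                     # photon enters the object arm and is absorbed
    # if not absorbed, the photon is projected onto arm |0>; the reversing beam splitter
    # then routes it to the otherwise-dark port with probability sin^2(theta)
    p_succ = np.cos(theta) ** 2 * np.sin(theta) ** 2
    eta = p_succ / (p_succ + p_abs)                # efficiency figure of merit used below
    return p_abs, p_succ, eta

print(interrogation_stats(np.pi / 4))      # (0.5, 0.25, 0.333...): balanced Elitzur-Vaidman case
print(interrogation_stats(5 * np.pi / 6))  # eta ~ 0.4286 at the working point theta = 5*pi/6
```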
The trade-off between what can be achieved with quantum theory and any noncontextual model is described by the noncontextuality inequality related to the prepare-and-measure scenario described before. Noise affects the violations of the inequalities in two different ways: first, it increases the noncontextual bound on the right-hand side, and second, it decreases the values of the statistics. These two factors suggest that noise is very detrimental to proofs of contextuality that consider specific (target) equivalences. This is common in proofs of contextual advantage, cf. Refs. [13, Fig. 3, p. 7] and [9, Fig. 7, p. 11]. Recall that, operationally, the efficiency of the task is characterized by the quantity η = p(1|M_0, P_{−θ}) p(0|M_0, P_θ) / [ p(1|M_0, P_{−θ}) p(0|M_0, P_θ) + p(1|M_0, P_θ) ], where p(0|M_0, P_θ) corresponds to the probability that, upon the measurement M_0 (the 'which-way' measurement deciding the path that the single photon took), the photon is found in the mode corresponding to state |0⟩; similarly, p(1|M_0, P_θ) is the probability that the photon is found in the complementary mode (interpreted as the 'bomb' exploding in the Elitzur-Vaidman experiment), while p(1|M_0, P_{−θ}) corresponds to the probability that a which-way measurement finds the photon in mode |1⟩ upon the preparation P_{−θ}. Note that imposing the symmetry p(1|M_0, P_{−θ}) = p(1|M_0, P_θ) simplifies the expression into η = p(0|M_0, P_θ) / [ p(0|M_0, P_θ) + 1 ]. The strategy from now on will be to use the theoretical description of the quantum experiment in terms of noisy quantum states affected by the channel D_ν. We therefore associate to each state |ψ_i⟩⟨ψ_i| the state D_ν(|ψ_i⟩⟨ψ_i|) = (1 − ν)|ψ_i⟩⟨ψ_i| + ν I_d/d, with I_d a d × d identity matrix, implying that I_2/2 corresponds to the maximally mixed qubit state. The factor ν represents the amount of noise. We map the measurement effects 0|M_0 → D_ν(|0⟩⟨0|), 1|M_0 → D_ν(|1⟩⟨1|) and, similarly, for all θ, 0|M_θ → D_ν(|θ⟩⟨θ|); the same holds for the preparations. Therefore, the optimal quantum strategy obtained under the effect of the channel D_ν is given by the same expression for η evaluated on the noisy states and effects, where for simplicity we write ρ_s ≡ D_ν(|s⟩⟨s|). For the noncontextuality bound, we can see that ε_0, ε_θ and ε_{−θ} become functions of ν, which are straightforward to compute. In this way, the robust noncontextual bound provided by the prepare-and-measure noncontextuality inequality becomes a function η_NC(θ, ν). Whenever η > η_NC, we observe an advantage provided by quantum contextuality in the interrogation task. The function η_NC(θ, ν) robustly characterizes the validity of the no-go result, and not necessarily the actual existence of a noncontextual model that reproduces the data. In Fig. S4(a) we plot both the efficiency η that can be achieved with quantum theory (blue) and the noncontextual bound η_NC (pink) as a function of the parameter θ that characterizes the transmissivity of the beam splitters in the MZI, and of the amount of noise ν captured by the channel from Eq. (21). The curve on which the two meet characterizes the degree of noise ν for which no advantage can be claimed. In Fig. S4(b) we plot the robustness to the noise parameter ν as a function of θ, as described by the operational prepare-and-measure scenario considered. In this case, for ν > 0.057 we would lose the gains guaranteed by contextuality in the protocol. From the analysis in Fig.
S4, it is clear that for the operational prepare-and-measure (PM) scenario considered, the efficiency of quantum interrogation is fairly sensitive to depolarising noise.Each fixed value of θ characterizes a specific PM scenario.We can analyse if our results corroborate the hypothesis that the experimentally observed efficiency is higher than those achieved by noncontextual models, considering noise in the device.In order to do so, we study contextuality in the device for the case of Robustness is captured by the degree of depolarizing noise captured by a parameter ν from the channel (1 − ν)ρ + νI/2, for any state ρ prepared inside the interferometer.The purple triangles corresponds to the efficiencies η = 1/3 and η = 1/2 achieved using the toy model from Ref. [34].The blue star corresponds to a choice of θ = 5π/6 that can be translated into a proof of contextuality as originally presented in Ref. [27], and that achieves η(5π/6, 0) = 0.428.If ν > 0.057 the gap is lost and it is impossible to conclude contextual advantage for the task.(b) Curve marking, for each value of θ, the noise level for which any claim of contextual advantage is lost.Visually, this curve marks the intersection of the two curved regions present in (a). values r = 0.75, where θ = 5π/6.In what follows, we will first experimentally test if the operational equivalences that the MZI states should satisfy are reproduced by the device within experimental error, and then use these states to calculate the value of the noncontextuality inequality Eq. (20).A violation will robustly indicate that the device is witnessing generalized contextuality within the so-called algebraic/inequalities approach.We will later use a different approach, the so called geometric/general probabilistic theories (GPT) approach [35,36], to obtain improved results for robustness to depolarizing noise, and obtain experimental evidence of contextuality also from the GPT perspective.We finish the discussion by showing that the efficiency experimentally achieved in the quantum interrogation task cannot be achieved with noncontextual models, and comment on open loopholes for our test. Supplementary Note 4. Loophole analysis and benchmarking contextuality with linear programming tools A. Generalized contextuality from the algebraic approach: inequality violation We have performed quantum tomography for the states that are generated by both beam splitters inside the MZI used in the quantum interrogation test, namely S ideal := {|0⟩, |θ⟩, | − θ⟩, |1⟩, |θ ⊥ ⟩, | − θ ⊥ ⟩} that correspond to density matrices S := {ρ 0 , ρ 1 , ρ 2 , ρ 3 , ρ 4 , ρ 5 }, for the value θ = 5π/6 resulting in states in an hexagon of the rebit subspace of the Bloch sphere.The resulting states are depicted in Fig S5 .As one of the assumptions present in the description of the scenario, and relevant for the derivation of inequality (20), is the assumption that operationally the states satisfy the operational equivalences ρ 0 + ρ 3 = ρ 1 + ρ 4 = ρ 2 + ρ 5 .We probe these equivalences by performing quantum state tomography and show that these are correct up to experimental error.In this case, the noncontextuality inequality takes the form Using the MZI to probe this specific inequality we obtain h The violation of this inequality is a robust witness of quantum generalized contextuality. 
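The ideal counterparts of these experimentally probed quantities are easy to reproduce numerically. The sketch below, written with the parametrization |θ⟩ = cos θ|0⟩ + sin θ|1⟩ at θ = 5π/6, constructs the six hexagon states, verifies that ρ0 + ρ3, ρ1 + ρ4 and ρ2 + ρ5 all coincide with the identity (the operational equivalences assumed above), and evaluates the K_3 combination r_{1,2} + r_{1,3} − r_{2,3} for the triple {|0⟩, |θ⟩, |−θ⟩}; which triple enters the probed inequality is our reading of the scenario, and the value 1.25 is what an ideal noiseless device would give.

```python
import numpy as np

theta = 5 * np.pi / 6
ket = lambda t: np.array([np.cos(t), np.sin(t)])       # |t> = cos t |0> + sin t |1>
perp = lambda t: np.array([np.sin(t), -np.cos(t)])     # state orthogonal to |t>

states = [ket(0), ket(theta), ket(-theta), perp(0), perp(theta), perp(-theta)]
rho = [np.outer(v, v) for v in states]                 # real states, so no conjugation needed

# operational equivalences: rho_0 + rho_3 = rho_1 + rho_4 = rho_2 + rho_5 (= identity)
print([np.allclose(rho[k] + rho[k + 3], np.eye(2)) for k in range(3)])   # [True, True, True]

r = lambda i, j: float(states[i] @ states[j]) ** 2     # two-state overlaps r_ij
print(r(0, 1) + r(0, 2) - r(1, 2))                     # 0.75 + 0.75 - 0.25 = 1.25 > 1
```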
Despite the fact that we are able to witness generalized contextuality using the above inequality, the no-go result for the gain in the efficiency η compared to noncontextual models is more sensitive to noise.In order to improve this gain, we consider a state-of-the-art approach to witnessing contextuality that uses general probabilistic theories (GPTs), as we now discuss.we present the results of the quantum state tomography, as well as confirmation that those states satisfy the operational equivalences Eq. ( 18) within experimental error. B. Generalized contextuality from the geometrical approach: Lack of simplex embeddability In order to improve the robustness of depolarizing noise for the specific PM scenario we can consider the GPT/geometric framework.The proof of contextual advantage from Ref. [6] relies on a specific characterization of PM scenarios using specific operational equivalences.In our experiment, these equivalences correspond to those present in Eq. ( 18), and we have probed their validity within experimental uncertainties.Due to the specification of target equivalences, the robustness to noise in our scenario is smaller than what would be possible in case one considers all possible operational equivalences satisfied by quantum theory viewed as a GPT, instead of an operational-probabilistic theory. We now use the framework studied in Refs.[24][25][26] to specify the accessible GPT fragment of quantum theory and probe its robustness to noise.We will consider that the reader is familiar with the GPT framework, and we refer to the introductory Refs.[37][38][39][40].In this perspective, we consider quantum theory as a GPT, in which case quantum states and measurement effects are considered GPT states and GPT effects. Remarkably, the geometry approach greatly simplifies the minimal requirements to test contextuality, which corresponds to probing the operational equivalences as it considers, by default, all the possible equivalences present in the accessible fragment described by a GPT.This fragment is described simply by: 1) a set of quantum states, 2) a set of measurement effects, 3) a unit effect of the accessible GPT, that in quantum theory simply corresponds to the identity and 4) a maximally mixed state.When one assumes quantum theory, 3 and 4 are assumed to be both the identity and the maximally mixed state I d /d.In order to probe generalized contextuality in such a framework, one must test if the accessible fragment just described allows for being embedded in some classical GPT, that is a simplicial GPT defined in a possibly higher dimensional GPT space.It was shown that this problem can be written as a linear program [24,25,41]. For the set of effects, we choose the set of tomographically complete measurements over the qubit space M := {X, Y, Z}.The pair A ≡ (S, M) completely characterizes the accessible GPT we investigate.The linear program from Ref. 
[25] returns robustness to depolarizing noise for the fragment A and it is given by ν(A) = 0.1121.Two comments are necessary: first, this robustness is considered with respect to A, and hence with respect to the noisy states, as opposed to the robustness considered before with respect to the ideal states.If we consider the robustness with respect to the ideal states using A ideal = (S ideal , S † ideal ) we have that ν(A ideal ) = 0.333, where S † ideal are the same elements of S ideal , now viewed as accessible effects.Second, for the quantum interrogation, we assume that the errors in the generation of coherence from the beam splitter are significantly higher than the errors in the detection apparatus.Finally, we see that robustness to depolarizing noise is much larger when one considers the GPT perspective as many more equivalences are used, instead of simply considering those from Eq. 18, as expected. Because we have obtained a value ν(A) = 0.1121 for the fragment using the states considered in Fig. S5, i.e., experimentally probed, this is once more a witness of generalized contextuality in our device, provided now by the geometric approach. C. The role of contextuality in the efficiency of η Let us compare the efficiency found experimentally with the smallest possible noisy efficiency η for the quantum interrogation that still allows for a claim of quantum advantage.We consider the specific value of efficiency η reached using the states from Fig. S5.The results are presented in Table S2.S2.Efficiency of the quantum interrogation task.Analysis of the efficiency η of quantum interrogation with respect to the states tomographically characterized in Fig. S5.We consider the ideal quantum efficiency obtainable with the ideal set of states/effects for the PM scenario, and its robust counterpart, in the two first rows.Third row corresponds to the analysis for the GPT approach.Robustness to depolarization is calculated, as well as its effect on the efficiency η as described with Eq. ( 22).Efficiencies higher than those in the third column cannot be explained with noncontextual models. We note that due to sensitivity of η to noise ν, the algebraic approach allows noncontextual models to reproduce the efficiency close to the one experimentally found, even though smaller within experimental error.Nevertheless, to better observe that the quantum efficiency found cannot be reproduced by noncontextual models for the fragment of quantum theory considered, we can use the GPT/geometrical approach.The last row in Table S2 presents the efficiency η found experimentally, the robustness ν obtained using the linear program from Ref. [25] for the quantum states characterized by tomography (see Fig. S5) and the smallest value for which any quantum efficiency would still incur into quantum advantage for that particular noise level.The other rows in Table S2 correspond to numerical bounds comparing optimal quantum results, the noise level that would destroy the quantum advantage argument and the corresponding quantum efficiency for that noise. A noncontextual model would be able to reproduce the data, and hence the efficiency of the task, only in the case that: (i) the effective noise model cannot be mapped as a depolarizing channel, neither be considered as a worse-case noise model through robustnesss to depolarization, implying that we cannot use the program from Ref. 
[25], (ii) the noncontextual model is somewhat conspiratorial, using precisely some non-trivial aspects of the noise to pass the test of non-embeddability characterized by robustness to depolarization, while being simplex-embeddable, for the states in Fig. S5 (iii) the noncontextual model is somewhat conspiratorial in a different sense, using the fact that we do not perfectly match the operational equivalences, but only respect them within small experimental errors, and uses this fact to pass the tests considered, i.e., inequality violation and linear-program, and reproducing the high efficiency found, (iv) the states found in Fig. S5 and those used to perform the interrogation task are not the same, even though they were probed within the exact same system.Such situations are highly discredited.In summary, Table S2 corroborates the hypothesis that we have experimentally witnessed efficiencies for the quantum interrogation task that could never be reached by noncontextual ontological models.The last row of the table shows the actual experimental results, for the efficiency obtained experimentally and for the robustness to depolarization obtained from quantum state tomography. D. Loopholes We make a small note on the loopholes associated with our test of contextuality.It is well established that generalized contextuality resolves various issues regarding common loopholes in experimental tests of no-go results.Among experimental aspects that constitute loopholes for testing the standard notion of Kochen-Specker contextuality or Bell's notion of local causality are: 1) sharpness of measurements, 2) statistical independence, 3) photon detection and 4) freedom of choice.None of those constitute loopholes for testing generalized contextuality. However, some loopholes persist in our experiment, these are: (a) Assumption of quantum theory.General tests of contextuality should, in principle, be theory-independent.Some might think that it is necessary to probe contextuality without assuming that quantum theory corresponds to the underlying GPT.(b) Robustness analysis beyond depolarizing noise.It is likely that, considering resource theoretic arguments, depolarizing noise corresponds to the most detrimental type of noise to test generalized contextuality.For instance, Ref. [26] showed proofs of contextuality with arbitrarily large dephasing noise.The tools developed in Ref. [25] do not allow for considering generic experimental imprecision's, i.e., uncertainty in the states and effects. (c) Tomographic completeness.Tests of generalized contextuality need the assumption that the fragment (or the operational equivalences) has tomographically complete sets of operations.We do not discuss this loophole here.We have assumed a tomographically complete set of measurements. It is worth mentioning the following.We do not use the methodology of secondary procedures, as described in Refs.[9,17] for imposing the symmetries of the operational PM scenario.However, the only symmetry used for violating Eq. ( 25) was that r i,j = r j,i .We study this in depth for 4-dimensional states in Appendix 7.For qubit violations, this source of error is smaller than what would be necessary to lose the violation.Note that this is not a loophole for the analysis using the GPT approach.Therefore, this is not a loophole in our test for contextuality overall, and only of our considerations using the operational PM scenarios. Supplementary Note 5. 
Coherence witnesses tailored for two-level systems An extensive study of all the possible inequalities bounding the coherence-free sets of overlaps of up to 6 states was done in Ref. [33].Among these, some inequalities stand out due to their conceptual relevance and robustness to noise.For instance, as was proposed in Ref. [6], the following inequality witnesses coherence inside Mach-Zehnder interferometers (MZIs) , in a robust and efficient way (see Fig. S6a for a description of this set-up): It was shown that this inequality has a large quantum violation, ∼ 0.795, and it applies to the common scenario of an MZI with balanced beam splitters, using 10 different choices of internal phase shifters. Our experimental investigation regards such a coherence witness tailored for two-dimensional systems (qubits), reported in the inequality (26).We remind that the test requires the estimation of overlaps r i,j = |⟨ψ j |ψ i ⟩| 2 .Hence, we prepare the programmable UPP to implement the scheme depicted in Fig. S6 (a).We consider only two modes that individuate the MZI sketched in the figure.In the preparation stage, highlighted in red, we inject a photon in the modes 0 to prepare qubit state |ψ i ⟩ = cos θ i |0⟩ + e iϕi sin θ i |1⟩ by programming the two angles ϕ 1 = ϕ i and θ 1 = θ i .Conversely, in the measurement stage, highlighted in blue, the second phase shifter ϕ 2 is set to −ϕ j and θ 2 = θ j to realize the projection onto ⟨ψ j |.The overlap r i,j between the two states is given by the probability to detect a photon in mode 0. The inequality functional reported in the l.h.s of Eq. ( 26) is represented by the graph in Fig. S6 (a).Red (dashed) edges correspond to overlaps weighted by a −1 phase, while light blue (full) edges correspond to overlaps weighted by a +1 phase in the inequality.The maximum violation is 5 √ 5/4 ≈ 2.795.This bound is saturated, for example, by states distributed on the vertices of a regular pentagon that lies on the equator of the Bloch sphere as shown in Fig. S6 (a).The bound for coherence-free states is the value on the r.h.s. of the inequality, that is, 2. We experimentally measure all the possible overlaps r i,k with i, k ∈ {0, . . ., 4} of the states in a set with maximum violation, by tuning the parameters θ i = π/4 and ϕ k = k • 2π/5, for all k.In Fig. S6 (b) we report the matrix of the overlaps compared to the theoretical one.The corresponding violation is equal to 2.794 ± 0.007 and it is consistent with the theoretical one within one standard deviation.We estimated the violations by considering only the upper triangular part of the overlaps matrix, i.e r i,j with i > j.These results prove the generation of (basis-independent) coherence inside the MZI.Supplementary Note 6. UPP's settings for preparing and measuring qudits In the following, we report how qudit states are generated and projected in the programmable UPP.We also show the circuit settings to obtain the maximum violation of the coherence witnesses. A generic qutrit state can be encoded from a single photon in mode 0 by setting the parameters of the circuit {θ 1 , θ 2 , ϕ 1 , ϕ 2 } in Fig. 4a to generate the following state The maximum violation of h 4 has been reached by the qutrits with the following amplitudes on the {|0⟩ , |1⟩ , |2⟩} basis: , i 1 3 Fig. 
4b of the main text reports the circuit to encode a generic ququart in a single photon that enters from mode 1 of the device.The qudit expressed through the parameters {θ and the amplitudes of the set of states for maximizing the h 5 violation expressed in the computational basis are The circuit employed for the 5-mode qudits is not universal in the sense that it cannot generate generic states.The limitation was imposed by the number of layers of MZIs in the 6-mode UPP that was not enough to individuate two separated and independent preparation and measurement stages.The 5-mode states that we could prepare and measure are parameterized by |ψ⟩ = sin θ 1 cos θ 2 sin θ 4 |0⟩ + sin θ 1 cos θ 2 cos θ 4 |1⟩ + sin θ 1 sin θ 2 e iϕ1 |2⟩ + cos θ 1 sin θ 3 e iϕ2 |3⟩ + cos θ 1 cos θ 3 e iϕ3 |4⟩ (31) where {θ 1 , θ 2 , θ 3 , θ 4 , ϕ 1 , ϕ 2 , ϕ 3 } are the angles of the tunable beam splitters and phase shifters reported in Fig. 4c and the input mode is |2⟩.The maximum violation of h 6 is reached by the following states, described by the parameters: Supplementary Note 7. Effects of experimental noise in the inequalities estimation In the main text we reported the measurements of various inequalities aiming at witnessing coherence and the dimension of the Hilbert space.Such quantities require the calculation of overlaps r i,j = | ⟨ψ j |ψ i ⟩ | 2 between pairs of states that belong to a given set.The main sources of errors in the experimental calculation are the statistics of photon counts and the imperfections in preparing |ψ i ⟩ and measuring ⟨ψ j |.The uncertainties reported in the main text derive only from the poissonian statistics of single-photon counts.We refer to such errors as σ c .In this appendix we analyse the effects of an imperfect setting of the optical circuits parameters instead. The implicit assumption of the h n inequalities is that the overlap function r i,j is symmetric.In the experiment this assumption could not be perfectly satisfied.For example, the same state ψ i has to be encoded both as a ket and as a bra.The parts of the circuit dedicated for the preparation and the measurement are separated and involved different variable beam splitters and phase shifters.Any small mismatch between the two stages in encoding the same state ψ i makes the overlaps matrix not symmetric, i.e r i,j ̸ = r j,i .This effect generates errors in the h n estimation that may lead to over-or under-estimates.The reason is that the effective number of different states in the set is larger than n.For example in the experiment estimating h 5 , considering only the upper triangular part of the overlaps matrix as we did in the main text, the state ψ 1 is encoded as a bra for the r 0,1 calculation but also as a ket in the preparation stage for r 1,2 , r 1,3 and r 1,4 .An analogous consideration holds for states ψ 2 and ψ 3 .Thus, it follows that the actual number of states involved in the h 5 calculation could in principle be up to 8. In Fig. 
S7 we investigate such an effect that follows from errors in the circuit settings that prepare and measure the states for the h 5 calculation.We report the experimental estimate h exp 5 = 1.391 ± 0.011 in black which is larger than the maximum theoretical value h theo 5 = 1.375 in red.We consider two types of errors in the parameters {θ 1 , θ 2 , θ 3 , ϕ 1 , ϕ 2 , ϕ 3 } that generate the ququarts and in the angles {θ 4 , θ 5 , θ 6 , ϕ 4 , ϕ 5 , ϕ 6 } which perform the projection.The first one ϵ is a relative error in the setting of the angles in the circuit.The second type δ is an additive error, i.e. a bias in the angles.The red (blue) shaded area indicates the maximum dispersion of h 5 at a given error ϵ (δ).It is evident from such analysis how much h 5 is sensitive to small errors in the optical circuits.In orange we highlight the errors value for which the h 5 dispersion is equal to 2.5 and 10 σ c , the uncertainty associated to the photon counts of the experiment.On one hand, the two plots give an explanation for the value h exp Waveguides were directly written 25 µm below the top surface of the glass substrate, each by 6 overlapped laser scans.For the waveguide inscription the laser repetition rate was set to 1 MHz and 330 nJ pulses were focused by a 20× water-immersion objective (0.5 NA) while the substrate was translated at constant speed of 25.0 mm/s.After irradiation, the substrate underwent a thermal annealing process [43,44] which improves propagation losses and optical confinement properties of the waveguides.The resulting optical insertion loss of the whole circuit is about 3 dB. Thermo-optic phase shifters are based on resistive microheaters deposited on the top surface of the optical chip, precisely above the positions of the waveguides where a tunable phase term needs to be introduced.By driving a controlled current into the microheater, a controlled temperature increase is achieved locally within the substrate, which in turn induces, by the thermooptic effect, a proportional phase delay in the light propagating in the waveguide.To fabricate the resistive microheaters we used a technique similar to the one described in Ref. [45].Two metallic layers, respectively of chromium (5-nm thick) and of gold (100-nm thick), were deposited in sequence on the substrate surface, by thermal evaporation.Metal deposition was followed by a thermal annealing process in vacuum (300 • C for 3 h, p < 1 • 10 −5 mbar) to stabilize the resistivity of the metallic film.Finally, the same femtosecond laser used for waveguide inscription was used to pattern the microheaters: here femtosecond laser pulses of 200 nJ energy and 1 MHz repetition rate were focused by a 10× objective (0.25 NA), while translating the substrate at 2 mm/s constant speed.A single microheater has a length of 1.5 mm and a width of 10 µm, providing an average resistance value of about 125 Ω, and is connected to millimeter-wide contact pads by paths of minor resistance. 
To increase the efficiency of the phase shifters, and reduce cross talks between adjacent devices, insulating micro-trenches were also excavated on both sides of each microheater [46], using water-assisted laser ablation.In detail, we used the same Yb femtosecond laser system but we set the repetition rate to 20 kHz, and tuned the pulse-compressor to stretch the laser pulses to about 1 ps duration.Ablation was performed by focusing pulses with 1.5 µJ energy with a 20× water-immersion objective (0.5 NA); scanning speed was 4 mm/s.Each trench has a width of 60 µm and a length of 1.5 mm, i.e. it is as long as the microheater.The glass 'wall' between two trenches, into which the waveguide is inscribed and above which the microheater is deposited, is as thin as 20 µm.For practical reasons, the trench-excavation process followed the waveguide fabrication (namely, it was conducted after waveguide irradiation and annealing) and preceded the deposition of the metallic layers for phase shifter fabrication. As a final step of the process, the UPP was assembled on an aluminum heat sink along with interposing printed circuit boards and fiber arrays (single-mode at the input and multi-mode at the output) in order to guarantee easy electrical and optical access to the device. Once fabrication and packaging were complete, the UPP was subject to an extensive calibration procedure with the aim of fully characterizing the relation between electrical currents and phase terms induced throughout the circuit.This calibration process is based on the following model.Under the assumption of perfectly balanced (i.e.50:50) directional couplers, the normalized optical power P cross at the cross output of the MZI is expressed by where θ is the internal phase of the MZI, namely the phase term that we tune to change the splitting ratios of the variable beam splitters of the UPP.In principle, due to thermal cross talk effects, the phase delay θ i induced on the i-th MZI depends on the electrical current I j flowing through the j-th microheater and this dependence is governed by Joule's law of heat dissipation.In formulas: where θ 0,i are static phase contributions related to fabrication tolerances, α i,j are the thermo-optic coupling coefficients between microheaters and MZIs and β j are correction factors needed to take into account the dependence of the microheaters' electrical resistance on the temperature.Here, the superposition theorem is successfully employed in presence of the nonlinear correction factors β j thanks to the reduced cross talks and to the limited dependence of gold electrical resistivity on the temperature.Moreover, it is worth noting that, due to the peculiar geometry of the circuit (see again Fig. S8) horizontally neighboring microheaters are much further apart (some mm) than vertically neighboring ones (a few hundreds of µm).As verified experimentally, this means that we can neglect the thermo-optic coupling between microheaters and MZIs that are not on the same column, with a great advantage in terms of calibration effort and accuracy. 
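Because Eqs. (33) and (34) are not reproduced here, the snippet below is only a schematic, single-heater version of this calibration step: it assumes a cosine fringe P_cross = cos²(θ/2), as expected for balanced 50:50 couplers (the sign and offset conventions may differ from the actual device), and a purely quadratic Joule-heating phase-current relation θ(I) = θ_0 + α I², neglecting the β corrections and the cross-talk terms discussed above. The data are synthetic, and θ_0 and α merely stand in for the fitted θ_{0,i} and α_{i,j}.

```python
import numpy as np
from scipy.optimize import curve_fit

def p_cross(current, theta0, alpha):
    # assumed fringe shape for balanced couplers and quadratic (Joule) phase-current law
    return np.cos((theta0 + alpha * current ** 2) / 2.0) ** 2

currents = np.linspace(0.0, 8e-3, 60)                      # hypothetical sweep up to 8 mA
rng = np.random.default_rng(0)
measured = p_cross(currents, 1.1, 1.0e5) + 0.01 * rng.standard_normal(currents.size)

popt, pcov = curve_fit(p_cross, currents, measured, p0=[1.0, 8e4])
theta0_fit, alpha_fit = popt    # static phase offset and thermo-optic coupling coefficient
```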
Each MZI was individually characterized by employing coherent light at 785 nm (Thorlabs L785P25), an external photodetector (Thorlabs PM16-121) and a multichannel phase shifter driver (Qontrol Q8iv).A custom Python software was developed to enable fully automated measurements of the optical power transmission of the MZIs as a function of the electrical current flowing though a given microheater.The isolation procedure reported in [47] was adopted in order to guarantee that light was always entering only one of the MZI cell's inputs and that light coming from only one of its outputs was being detected.Optical fiber switches (Lfiber) were employed in order to route the light to a given input of the UPP and to allow the photodetector to collect the light coming from a given output.Finally, Eq. 33 and 34 were exploited to best fit the experimental data and extract the values corresponding to θ 0,i , α i,j and β j . Similar models and measurement procedures were employed also for the external phase shifters (i.e. the ones inducing the phase terms ϕ i at the input ports of the variable beam splitters).This was done with the same procedure reported for the internal phases θ, but interferometric rings were in this case purposely formed inside the UPP by exploiting multiple MZIs set to operate as mirrors (θ = π) or balanced beam splitters (θ = π/2) as reported in [48,49].The assumption of negligible heat diffusion along the horizontal axis was employed again, but this time phase terms induced vertically by either internal or external phase shifters in waveguide regions that do not correspond to physical phase shifters were remapped onto external phase shifters as horizontal cross talk terms following the algorithm reported in [50]. Once relation 34 is calibrated for all θ i and ϕ i , it is possible to invert it and thus obtain the set of the electrical currents I j corresponding to given phase settings (θ i , ϕ i ).In this work, the full calibration dataset was employed whenever the control of all the programmable MZIs composing the UPP mesh was necessary.In particular, we employed such a global calibration for the 5-mode qudits in the h 6 inequality.In all the other cases we preferred to perform ad hoc calibration procedures aimed at controlling a limited part of the processor with minimum experimental errors and, thus, maximum accuracy. Before starting the actual experiments, the UPP calibration was benchmarked through the following procedure: (I) extraction of a random matrix T ∈ U (6), (II) calculation of the corresponding phase settings through the decomposition algorithm described in [42], (III) implementation of the phase settings in the UPP by exploiting the calibration dataset, (IV) intensity measurements to reconstruct the moduli of all the elements of the experimental matrix T exp [51] and (V) evaluation of the Figure 1 . 
Figure 1.Mach-Zehnder interferometer (MZI) and its multimode generalization for quantum interrogation, coherence, and dimension witnesses.a) Any MZI can be ideally separated in a preparation stage (red), in which we prepare a qubit state |ψ(θ1, ϕ1)⟩, and a measurement stage (blue) that projects onto another qubit state |ψ(θ2, ϕ2)⟩.In a quantum interrogation experiment, the ?-box is an object that absorbs photons.b) In analogy with the MZI, a multi-path interferometer encodes d-level systems by a set of beam splitters θ1 and phase shifters ϕ1 that operate on the d mode as a unitary operator U (θ1, ϕ1).In the second stage, another round of beam splitters θ2 and phase shifters ϕ2 followed by a series of photodetectors detects photons at the d output ports.With a single-photon input at the top input mode |0⟩, this setup is capable of measuring two-state overlaps |⟨ψ2|ψ1⟩| 2 = |⟨0|U (θ2, ϕ2) † U (θ1, ϕ1)|0⟩| 2 . Figure 3 . Figure 3. Coherence and contextuality in a Mach-Zehnder interferometer.a) Scheme of the circuit employed for the quantum interrogation task.b) Efficiency η of the object's detection versus the reflectivity r = sin θ of the two beam splitters in the MZI.The red curve is the theoretical prediction while the red shaded area represents the deviations from the ideal model due to dark counts and imperfect calibration of the beam splitters.The error bars derive from the poissonian statistics of the single-photon counts. Figure 4 . Figure 4. Violations of hn inequalities by qudits.a-c) Circuit schemes for qutrits a), ququarts b) and 5-mode qudits c) preparation (red part)and measurement (blue).The single-photon signal enters from mode 0 for the qutrits case, from mode 1 for ququarts and from mode 2 for the 5-mode case.d-f) Comparison of the theoretical and the experimental matrices of the pairwise overlaps values ri,j = |⟨ψj|ψi⟩| 2 for the sets of states that maximize the violation of h4, h5, and h6.In particular, h4 is violated by sets of 4 qutrit states {ψ0, . . .ψ3} in d), the h5 inequality is violated by sets of 5 ququart states {ψ0, . . ., ψ4} e) and h6 is violated by sets of 6 quantum states of dimension 5 {ψ0, . . ., ψ5} f).The uncertainty reported for each inequality derives from the poissonian statistics of the single-photon counts. Figure 5 . Figure 5. hn inequalities as dimension witnesses.Random states sampled uniformly in Hilbert spaces of dimensions 2 (blue), 3 (purple) and 4 (green).Bold points correspond to ∼ 200 experimental preparations of sets of uniformly random qudit states for each dimension.The uncertainties are smaller than the points' size.The shaded points in the background are ∼ 5000 sets of numerically simulated states for each dimension.The dotted red line indicates the threshold value 1 for hn required to witness coherence.a) Distributions of the h4(r) values.The purple line is the maximum violation of the h4 inequality for state dimension d = 3. b) Same analysis for the h5 inequalities.Here, we observe a lower bound for d = 2, highlighted by the blue line, that allows better discrimination of high-dimensional sets of states.The green line is the maximum violation achieved with the set which includes states of d = 4. c) Distributions for the h6 inequality.We report only the maximum violation measured for the coherence witness since our setup is not a universal state preparator for d = 5.Blue and purple lines highlight the maximum values of h6 for d = 2 and d = 3 respectively. Figure S1 . 
Figure S1.Examples of the graphs Kn representation of the inequalities, where n are the number of vertex.The vertex represents the quantum states and the edges the pairwise overlaps.The inequalities are expressed through the number of edges weighted with +1 (blue lines) and −1 (red dotted lines).For example, the K5 graph represents the hMZI in the configuration on the top or the h5 in the bottom one. Figure S2 . Figure S2.Numerically testing dimension and coherence witness using semidefinite programming.(Color online) We use a quadratic semidefinite programm (SDP) to generate (tight) upper bounds on the maximum value of hn(r) for any value n ≥ 4, for any dimension d ≤ n − 1. Plot (a) shows the result for n up to 19.We compare the results obtained using SDP with the results obtained by maximizing hn over all parameters of all quantum states present in a given complete graph Kn.Purple stars and pentagons corresponds to points from table S1, and we use those to benchmark the SDP results.Cases n = 4, 5, 6 were also investigated experimentally.We see that for all those values of n the inequality hn(r) ≤ 1 is, at the same time, bounding coherence and dimension.Plot (b) we use the fact that the quadratic SDP can be used to estimate the validity of the conjecture that only states with dimension d ≥ n − 1 can violate the inequality hn(r) ≤ 1 for 4 ≤ n ≤ 800.We see that the maximum quantum violation as n → ∞ is given by 1/2 and that the upper bound in the value for d = n − 2 is always 1.In both plots (a) and (b) open circles correspond to maximum values of hn(r) found using the SDP.Blue full lines mark violations of the Kn inequality, and have d = n − 1 while red dashed lines correspond to points that do not violate the inequality and have d = n − 2. Figure S4 . Figure S4.Efficiency of quantum interrogation.(a) Efficiency η operationally described in terms of successfully detecting the object as a function of a single parameter θ encoding the difference in transmission/reflectivity rates in the beam splitters.In the upper curve (blue) we have what is achieved with quantum theory and in the lower curve (pink/purple) the upper bound on the efficiency of noncontextual models.Robustness is captured by the degree of depolarizing noise captured by a parameter ν from the channel (1 − ν)ρ + νI/2, for any state ρ prepared inside the interferometer.The purple triangles corresponds to the efficiencies η = 1/3 and η = 1/2 achieved using the toy model from Ref.[34].The blue star corresponds to a choice of θ = 5π/6 that can be translated into a proof of contextuality as originally presented in Ref.[27], and that achieves η(5π/6, 0) = 0.428.If ν > 0.057 the gap is lost and it is impossible to conclude contextual advantage for the task.(b) Curve marking, for each value of θ, the noise level for which any claim of contextual advantage is lost.Visually, this curve marks the intersection of the two curved regions present in (a). Figure S5 . Figure S5.Quantum state tomography for states of the noncontextuality inequality.In (a) we consider the states that are present in the prepare-and-measure scenario, and highlight those used for obtaining a violation of the (robust) h3 inequality from the K3 graph.From (b)-(d)we present the results of the quantum state tomography, as well as confirmation that those states satisfy the operational equivalences Eq. (18) within experimental error. 5 Figure S6 . 
Figure S6.Symmetric quantum states witnessing coherence in the Mach-Zehnder from high inequality violation.a) Scheme for qubit encoding and measurement.The first tunable beam splitter θ1 and phase shift ϕ1 prepare the qubit |ψi⟩.The second pair (θ2, ϕ2) performs the projection on ⟨ψj|.The violation is maximized by qubit states equally spaced on a great circle of the Bloch sphere.The inequality is schematized by the graph in the figure.Blue edges are overlaps that must be summed and in red the ones to be subtracted.b) Overlaps rij for the maximum violation set on the Bloch sphere equator. 5 Figure S7 .Figure S8 . Figure S7.Errors in the optical circuit settings.a) Effect of imperfections summarized in the relative error ϵ associated to the angles {θ1, θ2, θ3, ϕ1, ϕ2, ϕ3} and {θ4, θ5, θ6, ϕ4, ϕ5, ϕ6} in the h5 estimation.The red area covers the maximum dispersion associated to h5 for a given level of ϵ.The yellow lines highlight the values of ϵ which produce errors in the h5 estimation equal to 2.5 and 10 σc, where σc is the uncertainty due to the single-photon counts statistics.The latter is reported as the grey area around the h exp 5 value in black.We report in red the theoretical value h theo 5 .b) Same analysis for additive errors δ in the angles of the circuit. Table I . Experimental results for witnesses of coherence and dimension.In bold the hn values that we measured for the coherence witnesses, and in Roman the maximum experimental values measured in the uniform random sampling of ∼ 200 sets of states of dimension d. Table S1 . The results are shown in tableS1below.Maximal values for hn inequality functionals from Kn graphs.Letting hn(r) be the functionals, bounding incoherent models for hn(r) ≤ 1, we numerically investigate the maximal values of hn(r) for n = 3, . . ., 10 the number of states and d = 2, . . ., 10 the dimension of the Hilbert space of all the states.The violations obtained for each inequality should not decrease as we increase the dimension, some examples of this in the table result from inefficiencies of the NMaximize Mathematica function, e.g. in violations of h8 for d = 7, 8.
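As an open-source counterpart of the NMaximize search described above, the sketch below parametrizes n pure states in dimension d, evaluates h_n through the relation h_n = h_n^+ − 2 h_{n−1}^+ of Lemma 5 (in our reading, the overlaps with the last state minus the overlaps among the remaining n − 1 states), and performs a few random-restart local maximizations with SciPy. Like NMaximize, it only returns local maxima, so occasional underestimates of the kind noted above for h_8 are to be expected.

```python
import numpy as np
from scipy.optimize import minimize

def h_n(params, n, d):
    """Evaluate h_n for n pure states in dimension d packed as real+imaginary parts."""
    v = (params[: n * d] + 1j * params[n * d:]).reshape(n, d)
    v = v / np.linalg.norm(v, axis=1, keepdims=True)
    r = np.abs(v.conj() @ v.T) ** 2                          # overlap matrix r[i, j]
    h_plus = lambda m: sum(r[i, j] for i in range(m) for j in range(i + 1, m))
    return h_plus(n) - 2 * h_plus(n - 1)                     # Lemma 5: h_n = h_n^+ - 2 h_{n-1}^+

def max_h_n(n, d, restarts=20, seed=0):
    rng = np.random.default_rng(seed)
    best = -np.inf
    for _ in range(restarts):
        x0 = rng.standard_normal(2 * n * d)
        res = minimize(lambda p: -h_n(p, n, d), x0, method="BFGS")
        best = max(best, -res.fun)
    return best

print(max_h_n(4, 2))   # stays at ~1.0, in line with the qubit no-violation result
print(max_h_n(4, 3))   # exceeds 1 for qutrits
```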
2023-11-05T05:06:36.541Z
2023-11-03T00:00:00.000
{ "year": 2023, "sha1": "11648dc6aa5ffe4983cb208590fcd7cc793cf2cc", "oa_license": "CCBYNC", "oa_url": "https://www.science.org/doi/pdf/10.1126/sciadv.adj4249?download=true", "oa_status": "GOLD", "pdf_src": "ArXiv", "pdf_hash": "563eb2a61dc7dc1a226637d0410563a4999d31b5", "s2fieldsofstudy": [ "Computer Science", "Physics" ], "extfieldsofstudy": [ "Medicine", "Physics" ] }
257597797
pes2o/s2orc
v3-fos-license
Chemoinformatics approach to design and develop vanillin analogs as COX-1 inhibitor Background Coronary Heart Disease (CHD), commonly known as the silent killer, affected the severity of COVID-19 in patients during the pandemic era. Thrombosis, or blood clotting, contributes to the buildup of plaque on the coronary artery walls of the heart, which leads to coronary heart disease. Cyclooxygenase 1 (COX-1) is involved in the production of prostacyclin by systemic arteries; hence, inhibiting the COX-1 enzyme can prevent platelet reactivity mediated by prostacyclin. In pursuit of good health and well-being, research into the discovery of new antithrombotic drugs continues. Objective This study aims to predict, in silico, the potential of 17 vanillin analogs as ligands of the COX-1 receptor. Methods This research employed a molecular docking analysis using Toshiba hardware and AutoDock Tools version 1.5.7, ChemDraw Professional 16.0, Discovery Studio, UCSF Chimera, SWISSADME and pKCSM; the docking protocol was validated with the native ligand of COX-1 (PDB ID: 1CQE). Results The validation result indicated that the RMSD was <2 Å. The compound 4-formyl-2-methoxyphenyl benzoate had the lowest binding energy for COX-1 inhibition, with a value of -7.70 kcal/mol. All vanillin derivatives showed good intestinal absorption, and the toxicity prediction indicated that they were non-hepatotoxic. All these compounds have the potential to be effective antithrombotic treatments when administered orally. Conclusion In comparison to the other vanillin derivatives, 4-formyl-2-methoxyphenyl benzoate has the lowest binding energy value; hence, this analog can be synthesized and its potential as an antithrombotic agent can be confirmed by in vivo studies. Introduction Coronary heart disease is the leading cause of death worldwide and is known as the silent killer. During the COVID-19 pandemic, coronary heart disease had a greater impact on the severity of COVID-19 in affected patients than in those without coronary heart disease. 1 This is because the inflammation of the myocardium and the microvascular dysfunction caused by SARS-CoV-2 promote coronary plaque formation and even mortality. 2 Coronary heart disease is caused by the buildup of plaque on the coronary arteries and other arterial walls throughout the body. Plaque formation on the coronary artery walls of the heart is induced by two processes, namely atherosclerosis and thrombosis. Thrombosis refers to the formation of a blood clot in both veins and arteries. Such clots can restrict blood flow through the vessels, a process that involves blood cells such as platelets, plasma proteins, coagulation factors, and the endothelial lining of arteries and veins. 3 Aspirin is a common antithrombotic treatment, but it still has side effects in the form of gastrointestinal disturbances 4 and induced bleeding. 5 Because of these side effects, the development of novel antithrombotic agents is important. To achieve good health and well-being, efforts to discover new compounds as antithrombotic agents continue. The use of technologies such as chemoinformatics provides great assistance in drug discovery and drug development. Chemoinformatics plays a crucial role in characterizing natural materials through the physicochemical properties of their chemical compounds. 6 Machine learning helps drug designers extract chemical information from enormous databases of substances in order to design drugs with specific biological features.
7 Computer-aided drug discovery comprises computational methods used to discover novel drugs, such as molecular docking. 8 Computational approaches can offer a significant contribution to the development of drugs based on natural ingredients and support experimental research during the early discovery phase. An important aspect of chemoinformatics is the search for lead chemicals and for target proteins that affect the targeted activity. Cyclooxygenase 1 (COX-1) is one of the target proteins for reducing the formation of thrombosis. COX-1 is involved in the production of prostacyclin by the systemic arteries; 9 hence, inhibiting the COX-1 enzyme can prevent prostacyclin-mediated platelet reactivity. 10 An antithrombotic mechanism targeting COX-1 can block the actions of thromboxane A2 (TxA2) in enhancing platelet aggregation, which can lead to severe cardiovascular thrombosis and cerebrovascular illness. If COX-1 is suppressed, TxA2 synthesis is reduced and platelet aggregation, along with plug formation, is decreased as well. 11 Many substances have been shown to reduce thrombosis by inhibiting COX-1, including ferulic acid. Besides inhibiting COX-1, ferulic acid may also interact with the P2Y12 receptor. 12 Vanillin is a precursor in the synthesis of ferulic acid. 13 It has been reported that vanillin exhibited in vitro activity against platelet aggregation induced by arachidonic acid, although it was inferior to aspirin used as the control. 14 In antiplatelet aggregation tests with varied inducers, vanillin inhibited 100% of the aggregation induced by arachidonic acid in vivo; however, vanillin displayed only a low inhibitory effect in test animals when aggregation was induced by collagen, thrombin, or platelet-activating factor. 15 In this study, vanillin was selected as the lead compound whose structure would be modified to generate vanillin derivatives with enhanced bioactivity. The biological activity of a compound may be enhanced by modifying the molecular structure of the lead compound. The Topliss method is one such molecular modification technique: it employs a series of substituents that are predicted to increase the biological activity of compounds based on the physicochemical properties of the substituents, following Hansch's concept of structure-activity relationships. Attaching substituents with particular lipophilic, electronic, and steric properties at specific positions on the benzene ring can modify the lead structure to produce molecules with greater activity. 16 According to a previous report, modifying the structure of ferulic acid by changing the phenolic -OH group at the para position into an ester can produce additional hydrophobic interactions of the alkyl groups (-R) and aromatic rings with amino acids at the binding site of the receptor. In addition, the carbonyl group introduced by the ester increases the number of hydrogen bonds, which is needed for the increase in antiplatelet activity of the modified compound. In this study, the structure of vanillin was modified by replacing the phenolic OH group with various esters. 12 Further modification then varied the acyl group of the ester, resulting in 17 designed compounds (a minimal sketch of this kind of analog enumeration is given below). The Topliss approach was used to generate the 17 vanillin analogs, which were then evaluated by in silico methods against the COX-1 enzyme to determine which compounds could be developed as antithrombotic candidates.
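The paper's actual 17 analogs are listed in its Table 1, which is not reproduced here. Purely as an illustration of the modification strategy just described (replacing vanillin's phenolic -OH with -O-C(=O)-R esters), the following minimal Python sketch enumerates a few hypothetical ester analogs. RDKit is assumed to be available; the acyl fragments, names, and SMILES strings are assumptions of ours, not compounds taken from the paper.

```python
# Minimal sketch (not the authors' code): enumerate ester analogs of vanillin by
# replacing the phenolic -OH with -O-C(=O)-R for a few illustrative acyl groups.
# Requires RDKit. The acyl fragments below are hypothetical choices for illustration.
from rdkit import Chem
from rdkit.Chem import Descriptors

VANILLIN = "COc1cc(C=O)ccc1O"  # 4-hydroxy-3-methoxybenzaldehyde

# Hypothetical acyl fragments R in aryl-O-C(=O)-R (names are illustrative only)
ACYL_FRAGMENTS = {
    "acetate": "C",
    "benzoate": "c1ccccc1",
    "4-chlorobenzoate": "c1ccc(Cl)cc1",
}

def make_ester_analog(acyl_smiles: str) -> Chem.Mol:
    """Build a 4-formyl-2-methoxyphenyl ester by esterifying the phenolic oxygen."""
    smiles = f"COc1cc(C=O)ccc1OC(=O){acyl_smiles}"
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError(f"Invalid SMILES generated: {smiles}")
    return mol

if __name__ == "__main__":
    for name, fragment in ACYL_FRAGMENTS.items():
        mol = make_ester_analog(fragment)
        print(f"4-formyl-2-methoxyphenyl {name}: "
              f"MW={Descriptors.MolWt(mol):.1f}, logP={Descriptors.MolLogP(mol):.2f}")
```

In practice, such an enumeration only provides candidate structures; the Topliss scheme itself guides which substituents to try next based on the observed activity of earlier analogs.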
In addition to the inhibitory activity against COX-1, the ADME and toxicity (ADMET) profiles of the compounds were important considerations in selecting compounds to be developed as drug candidates. Prediction of the ADMET profile provides important information on how a compound would behave in the human body. 17 Therefore, the pkCSM tool was used to evaluate the ADMET properties of the 17 designed compounds. On the basis of the in silico molecular docking analysis, the ADMET predictions, and Lipinski's Rule of Five, the compound with the best profile in this study would be selected for further development as an antithrombotic agent. Receptor preparation The protein used as the receptor was COX-1 (PDB ID: 1CQE) containing the native ligand FLP in chain A, which was downloaded from http://www.rcsb.org/pdb/ in pdb format. The receptor was then separated from its natural ligand and the water molecules using the Discovery Studio 2017 program. The docking validation was done by re-docking the natural ligand flurbiprofen to the receptor using the AutoDock Tools version 1.5.7 program. The validation parameter used was the RMSD (Root Mean Square Deviation), and the validation was considered successful if the resulting RMSD value was <2.0 Å. 18 Ligand preparation The structures of the vanillin analogs used as ligands were drawn in two dimensions (2D) using the ChemDraw version 20.1.1 program, then converted to three-dimensional (3D) structures and saved in pdb format. Energy minimization of the 3D molecular structures was performed using the Avogadro program with the MMFF94 force field, which aims to obtain the most stable conformation before the ligand-receptor simulation; the optimized structures were then saved in pdb format. The classification of the ligands in this study can be seen in Table 1. Molecular docking analysis After the validation met the requirements, the research continued by performing molecular docking simulations for the vanillin derivatives in the same way, with two replications. The data obtained from the docking simulation were the binding energy (kcal/mol), obtained from the best pose of the ligand-receptor complex, and the inhibition constant (Ki). The prepared ligands were docked using 100 independent genetic algorithm runs (GA runs). The parameters in the docking simulation were set as follows: population size 150, maximum number of energy evaluations 2,500,000 (medium), maximum number of generations 27,000, maximum number of active sites 1, gene mutation rate 0.02, and crossover rate 0.8, as used in the genetic algorithm method (4.2). 19 Visualization of ligand-receptor interactions In molecular docking studies, the visualization process plays an important role in determining the amino acid residue interactions that occur between the ligand and the target protein (receptor). All visualizations were evaluated using the Discovery Studio program. 20 Lipinski's Rule of Five & ADMET prediction In discovering potential new medicinal compounds, several rules can be used to assess whether a pharmacologically active compound could be promoted as a drug to be given orally to humans, including Lipinski's Rule of Five. In addition, an ADMET prediction was carried out to determine the pharmacokinetics of the drug candidates, including absorption, distribution, metabolism and excretion in the human body.
Both predictions were made by processing the molecular structure of each compound as a SMILES code on the pkCSM 21 and SwissADME 22 online sites. Results The results of the molecular docking are shown in Table 1, which contains the binding energies, inhibition constants, and hydrogen bond interactions with amino acid residues in COX-1. Table 2 contains the results for the parameters of Lipinski's Rule of Five (RO5), analysed using SwissADME. The ADMET profiles of the vanillin analogs can be seen in Table 3. The interactions of aspirin and 4-formyl-2-methoxyphenyl benzoate were visualized using the Discovery Studio (DSV) 2017 program and can be seen in Figure 1, where the dashed lines show the interactions between the ligand and amino acids in the receptor. Figure 2 shows visualizations of the types of amino acids involved in the ligand-receptor interactions and the hydrophobic interactions of the selected ligands. Discussion Cyclooxygenase 1 (COX-1) plays a role in producing prostacyclin in systemic arteries, 9 so when the COX-1 enzyme is inhibited, platelet reactivity mediated by prostacyclin can be blocked. 10 One of the COX inhibitor drugs is aspirin, belonging to the NSAID class, whose most frequent side effect is gastrointestinal disturbance, which can lead to gastric ulcers. 4 There is therefore a need to discover new drugs from natural ingredients that are relatively safer and more beneficial. In this study, the receptor used was the COX-1 protein (PDB:1CQE), which was still bound to its natural ligand, flurbiprofen. 23 The separation of the receptor from its natural ligand was carried out using the Discovery Studio 2021 program, and the water molecules first had to be removed so that the docking simulation could run effectively. 24 The validation process was carried out to determine the suitability of the method being used, with the RMSD value describing how closely the docked ligand pose reproduced the crystallographic pose. 25 The RMSD of the validation process must be <2 Å for the method used to be considered appropriate and correct. In the validation process, the result was 1.42 Å, obtained by setting the grid box dimensions in x, y, z to 40, 40, 40 and the coordinates of the grid box center to x (31.082), y (37.894), and z (205.665), which defines the binding site where the interaction between the ligand and the target protein takes place. 26 Further studies were carried out on fifteen vanillin compounds classified into 3 groups, namely chain extension, aromatic substitution with electron-donating groups, and aromatic substitution with electron-withdrawing groups, with COX-1 as the protein target. The binding energies were ranked from the best value downwards (Table 1). The lower the binding energy value, the more stable the conformation of the ligand-target protein complex. 27 The compound 4-formyl-2-methoxyphenyl benzoate had the best binding energy against COX-1 among the compounds, with a value of -7.70 kcal/mol. Aspirin has a binding energy of -4.70 kcal/mol, and all of the modified vanillin compounds showed better potency than aspirin in inhibiting COX-1. The inhibition constant indicates the strength of the inhibitory action of a compound on the receptor; the smaller the value of Ki, the stronger the inhibition. 28 The compound 4-formyl-2-methoxyphenyl benzoate also has the best Ki value among the compounds, namely 2.28 μM, while aspirin has a Ki of 356.01 μM.
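The inhibition constant reported by AutoDock is conventionally derived from the estimated free energy of binding via Ki = exp(ΔG/RT). As a quick consistency check, and not as part of the authors' workflow, the short Python sketch below reproduces the reported Ki values from the reported binding energies, assuming T = 298.15 K.

```python
# Minimal sketch: relation between an estimated free energy of binding (kcal/mol)
# and the corresponding inhibition constant, Ki = exp(dG / RT).
# Assumes T = 298.15 K; R is the gas constant in kcal/(mol*K).
import math

R_KCAL = 1.987e-3   # gas constant, kcal/(mol*K)
T = 298.15          # temperature, K

def ki_from_binding_energy(dg_kcal_per_mol: float) -> float:
    """Return the inhibition constant (in mol/L) implied by a binding energy."""
    return math.exp(dg_kcal_per_mol / (R_KCAL * T))

if __name__ == "__main__":
    for name, dg in [("4-formyl-2-methoxyphenyl benzoate", -7.70), ("aspirin", -4.70)]:
        ki_uM = ki_from_binding_energy(dg) * 1e6
        print(f"{name}: dG = {dg} kcal/mol -> Ki ~ {ki_uM:.2f} uM")
    # Prints roughly 2.3 uM and 359 uM, close to the reported 2.28 uM and 356.01 uM.
```

The small discrepancies arise only from rounding of the reported binding energies; the relationship itself also makes explicit why a more negative binding energy corresponds to a smaller, i.e. stronger, Ki.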
In molecular docking studies, visualization plays an important role. One of its purposes is to determine the amino acids involved in the interactions that occur between the ligand and the target protein (receptor). Visualization of the amino acid residues involved aims to identify the types of interactions, including van der Waals interactions, electrostatic interactions, hydrogen bonds, hydrophobic interactions (pi-alkyl, alkyl-alkyl, pi-sigma, and pi-pi interactions), and halogen interactions. 29 The type of binding that is most important in these molecular interactions is the hydrogen bond; in the DSV program, this bond is divided into two types, conventional hydrogen bonds and carbon-hydrogen bonds. Conventional hydrogen bonds are stronger than carbon-hydrogen bonds, 30 and supporting interactions such as hydrophobic interactions also contribute to increasing the stability of the ligand-receptor complex. 31 Compared with the binding energies of the native ligand (-7.99 kcal/mol) and 4-formyl-2-methoxyphenyl benzoate (-7.70 kcal/mol), aspirin showed the weakest binding (-4.70 kcal/mol) (Figure 1). The number of hydrophobic interactions and the presence of hydrogen bonds determine the strength of the binding between ligand and receptor (Figure 2). Aspirin formed one conventional hydrogen bond, with SER A:530, whereas 4-formyl-2-methoxyphenyl benzoate has two conventional hydrogen bonds, with SER A:530 and ARG A:120, giving it a lower binding energy and hence a better binding affinity. Furthermore, aspirin has 4 hydrophobic and related interactions, including a Pi-Sulfur interaction, a Pi-Pi T-shaped interaction, and a carbon hydrogen bond, involving TRP A:387, MET A:522, ILE A:523, and TYR A:385, while the compound 4-formyl-2-methoxyphenyl benzoate has 8 hydrophobic interactions, namely Pi-Sulfur, Pi-Sigma, Pi-Alkyl, Pi-Pi T-shaped and Amide-Pi stacked interactions with TRP A:387, GLY A:526, TYR A:385, LEU A:352, MET A:522, LEU A:531, ALA A:527, and VAL A:349, which also gives it a better binding affinity than aspirin. The compound 4-formyl-2-methoxyphenyl benzoate shares similarities with the native ligand, specifically an Amide-Pi stacked interaction with GLY A:526 and a Pi-Sigma interaction of the benzene ring with ALA A:527. In the native ligand, there is an attractive charge-charge interaction between the negatively charged, deprotonated carboxylate oxygen and ARG A:120, which may strengthen the ligand-receptor binding. Therefore, the native ligand binds slightly more strongly (its binding energy is slightly lower) than 4-formyl-2-methoxyphenyl benzoate. The Lipinski Rule of Five (RO5) analysis (Table 2) aims to explore the pharmacokinetic properties of a drug molecule, including absorption, distribution, metabolism and excretion in the human body, 32,33 and to predict the similarity of the molecular properties of a compound to those of existing drugs (drug-likeness). The results showed that the molecular weights of the 17 vanillin analog compounds were less than 500 Da, meaning that the compounds are able to diffuse across and penetrate cell membranes. The log P value relates to the ability of a compound to dissolve in non-polar solvents, lipids, fats, and oils. All compounds showed log P values <5, meaning that the compounds are hydrophobic enough to penetrate the lipid bilayer. Hydrophobicity plays a role in determining the drug's distribution in the body after the absorption process and the rates of metabolism and excretion.
34 The number of hydrogen bond donors (HBD) must be <5 and the number of hydrogen bond acceptors (HBA) must be <10, and all compounds met these criteria. The HBD and HBA counts matter because the greater the number of H-bonds, the more energy is needed in the absorption process. 35 Molar refractivity (MR) indicates the polarity of a compound and is required to be in the range 40-130; the MR of all compounds falls within that range. All tested vanillin analog compounds met the requirements of the RO5, which means these compounds should have good absorption and are potentially suitable for oral consumption by humans. Caco-2 permeability is used to assess drug absorption through the intestinal epithelial cell barrier in humans. 36 Only aspirin had low permeability, while all vanillin derivatives possess good values. Intestinal absorption (IA) also plays an important role in maintaining the bioavailability of a drug so that it reaches its target site. A compound is considered poorly absorbed if the IA percentage is <30%, 21 and all vanillin derivatives have better values than aspirin and the parent vanillin. Furthermore, the volume of distribution at steady state (VDss) is the theoretical volume in which the total drug dose would need to be distributed uniformly to give the same concentration as in blood plasma. The higher the volume of distribution, the more of the drug is distributed in tissue rather than plasma. A log VDss value >0.45 indicates a high distribution volume, while a log VDss value <-0.15 indicates a low distribution volume. 21 Based on the results obtained, only 4-formyl-2-methoxyphenyl acetate, 4-formyl-2-methoxyphenyl propionate, 4-formyl-2-methoxyphenyl butyrate, 4-formyl-2-methoxyphenyl pentanoate, and the reference drug aspirin had low VDss values. Regarding the Central Nervous System (CNS), the CNS permeability parameter plays an important role in assessing drugs targeted at the CNS. Compounds with a CNS permeability log value >-2 are considered able to penetrate the CNS (good), whereas compounds with a value <-3 are considered unable to penetrate the CNS, 21 and all vanillin analogs met the 'good' criterion. The results indicate that the 17 vanillin derivatives have the potential to be used to treat CNS diseases (Table 3). The blood-brain barrier (BBB) is a protective system consisting of tightly packed endothelial cells that filters which substances can enter the brain from the blood. 37 BBB permeability (log BB) can also be used as a parameter to help predict reduced side effects and toxicity, or increased pharmacological efficacy, for a drug designed to target receptors in the brain. Compounds with a log BB value >0.3 are considered well distributed, i.e. able to cross the blood-brain barrier, whereas compounds with log BB <-1 are not well distributed into the brain. 21 According to the results, all compounds have log BB values less than -2, so the compounds are predicted to have less potential as drug candidates that need to penetrate the blood-brain barrier. There are two main isoenzymes responsible for drug metabolism, namely cytochrome P450 2D6 (CYP2D6) and cytochrome P450 3A4 (CYP3A4).
Almost all vanillin derivatives do not affect or inhibit the CYP2D6 and CYP3A4 enzymes, except the compounds 4-formyl-2-methoxyphenyl benzoate and 4-formyl-2-methoxyphenyl 4-(trifluoromethyl)benzoate, which can affect the CYP3A4 enzyme, so these two derivatives can be predicted to be metabolized by P450 enzymes in the body. Total clearance indicates the rate of excretion of a drug and plays a role in determining the drug dose required to achieve steady-state concentrations and in drug bioavailability. 21 A high clearance rate will cause fast drug excretion, while a low clearance rate will cause the drug to be excreted slowly and can cause toxicity. All the test compounds have total clearance values in the range -0.008 to 1.531. Lastly, in the toxicity prediction, none of the vanillin derivative compounds have hepatotoxic properties, and the predicted LD50 values are in the range of 2,012-2,267, which falls in category 5 and is considered non-toxic. Conclusions From all the results of this study, it was concluded that all the designed vanillin derivatives had good drug-likeness and ADMET profiles. The compound 4-formyl-2-methoxyphenyl benzoate has the greatest potential activity as an antithrombotic agent based on its in silico interaction with the COX-1 enzyme. These results must be confirmed by further research involving synthesis and in vivo studies.
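For readers who want to reproduce the drug-likeness screen discussed above locally, the following is a minimal sketch of the RO5 descriptors (molecular weight, log P, HBD, HBA, and molar refractivity) computed with RDKit. RDKit is an assumption on our part; the authors used SwissADME and pkCSM, whose predicted values (in particular log P) may differ slightly from RDKit's. The SMILES strings are illustrative assumptions rather than values taken from the paper.

```python
# Minimal sketch: computing the Rule-of-Five descriptors discussed above with RDKit.
# This is an illustration, not the authors' workflow (they used SwissADME/pkCSM).
from rdkit import Chem
from rdkit.Chem import Descriptors

# SMILES strings are assumptions for illustration.
COMPOUNDS = {
    "vanillin": "COc1cc(C=O)ccc1O",
    "4-formyl-2-methoxyphenyl benzoate": "COc1cc(C=O)ccc1OC(=O)c1ccccc1",
    "aspirin": "CC(=O)Oc1ccccc1C(=O)O",
}

def rule_of_five_report(smiles: str) -> dict:
    """Return the RO5 descriptors and a simple pass/fail flag for one molecule."""
    mol = Chem.MolFromSmiles(smiles)
    props = {
        "MW": round(Descriptors.MolWt(mol), 1),      # should be < 500 Da
        "logP": round(Descriptors.MolLogP(mol), 2),  # should be < 5
        "HBD": Descriptors.NumHDonors(mol),          # should be < 5
        "HBA": Descriptors.NumHAcceptors(mol),       # should be < 10
        "MR": round(Descriptors.MolMR(mol), 1),      # 40-130 (Ghose extension)
    }
    props["passes_RO5"] = (props["MW"] < 500 and props["logP"] < 5
                           and props["HBD"] < 5 and props["HBA"] < 10)
    return props

if __name__ == "__main__":
    for name, smi in COMPOUNDS.items():
        print(name, rule_of_five_report(smi))
```

Such a local check is useful as a first filter before submitting a larger analog series to the online ADMET predictors used in the study.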
2023-03-18T15:12:20.154Z
2023-03-16T00:00:00.000
{ "year": 2023, "sha1": "a3dd79974f3a7e99c0b44f23b6eaf4a6c3c8f241", "oa_license": "CCBYNC", "oa_url": "https://www.publichealthinafrica.org/jphia/article/download/2517/928", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "174092075f72fe1f0f09c44b05d6f7256b73cbee", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [] }
242032868
pes2o/s2orc
v3-fos-license
Recycling Marine Plastic into Clothing Apparel via Global Collaborations : According to the United Nations, aquatic pollution affects at least 800 species worldwide, with plastic responsible for up to 80% of the waste. Up to 13 million metric tonnes of plastic is expected to end up in the ocean every year, the equivalent of a trash or garbage truck load every minute. Plastic is a design failure; it was never intended to end up in animals' stomachs or at the bottom of the food chain in humans. The fashion industry is a massive contributor to the plastic waste found in the oceans, and so it becomes necessary for corporations to take sustainable steps in the direction of reducing ocean plastic pollution. One of the ways to do so would be by recycling ocean plastic into clothes. Our study focuses on analysing global collaborations and suggesting a series of steps for recycling ocean plastic. A study conducted on the recycling of TPU reveals that on constant reprocessing (recycling), the degradation of the hard and soft segments takes place at different rates and is mass-dependent. The remarkable conclusion from this study was that the performance of a fabric produced from TPU that has been recycled eight times was similar to that produced from TPU that has only been recycled once. [16] Marine litter is any persistent solid material that has been made or processed and is disposed of or abandoned into the marine environment, whether purposely or unintentionally. To put it another way, it is human-made waste that is dumped into the ocean from land or sea. Seventy-five per cent of plastic garbage that ends up as marine litter is uncollected waste. Meanwhile, less than 10 per cent of American plastic waste is recycled, and the U.S. has a 30-year history of shipping half of its recyclable plastic overseas, primarily to China and other developing nations lacking the infrastructure to manage it. That practice was drastically reduced only when China stopped buying plastic scrap in 2018 as part of a green campaign to clean up its environment. Global trash generation is projected to grow by 70% by 2050, from 2.01 billion tonnes today to 3.4 billion tonnes. Each year, Australians use 130 kg of plastic per person. Only about 12% of that is recycled. Worryingly, up to 130,000 tonnes of plastic will find its way into our waterways and the ocean. According to the report, while China is the world's most significant producer of plastic, the United States is by far the world's largest generator of plastic garbage, producing over 42 million metric tonnes (46 million U.S. tons) in 2016. The United States also ranks third among coastal nations regarding litter, illegally deposited rubbish, and other mismanaged waste on its shorelines. Southeast Asia has emerged as a plastic pollution hotspot due to fast urbanization and a growing middle class, which is increasing its consumption of plastic products and packaging due to their convenience and versatility. However, local trash management infrastructure has not kept up, resulting in massive amounts of mismanaged waste. COVID-19 has aggravated the situation by increasing the consumption of masks, sanitizing bottles, and online delivery packaging. When single-use plastic is thrown away rather than recovered and recycled, more than 75% of the material value of recyclable plastic is lost; in Thailand, the Philippines, and Malaysia this amounts to $6 billion per year.
With only 18 to 28 per cent of recyclable plastic collected and recycled in many nations, the majority of plastic packaging trash not only pollutes the environment, littering beaches and roadsides, but its economic worth is also lost. Sub-Saharan Africa and South Asia will grow the fastest, accounting for 35% of worldwide trash generation. The People's Republic of China (China), Indonesia, the Philippines, Thailand, and Vietnam account for more than half of the plastic trash input into the ocean. According to statistical estimates, China not only generates the most plastic (almost 60 million tonnes), but it is also the primary source of worldwide plastic leakage, i.e., plastics that are not adequately handled and leak into the environment. According to a recent study, ten rivers account for 88 to 95 per cent of the global load of river-origin unmanaged plastic. Eight of the top ten polluting rivers are in Asia, with the Yellow and Yangtze rivers being the most polluting. Clean-up of beaches and waterways, street sweeping, installation of storm-water capture devices, cleaning and maintenance of storm drains, manual litter clean-up, and public anti-littering campaigns to clean up and prevent marine litter cost communities along the West Coast of the United States more than $500 million per year. 1) Marine Litter's Two Main Sources a) Land-based Sources Rivers and other bodies of water are major entry sites for land-based garbage into the marine environment. Littering, dumping, and poor waste management methods are the primary causes of marine litter from land-based sources. b) Sources from the sea Shipping vessels, ferries and cruise liners, fishing vessels, private vessels, and other industry infrastructure contribute to marine litter from sea-based sources such as cargo, solid waste, and fishing gear. The most common cause of sea-based marine litter is abandoned, lost, or otherwise discarded fishing gear, which can be the most devastating from both an economic and environmental standpoint. Discarded fishing gear causes marine creatures to become entangled, suffocates coral reefs and other vital habitats, and creates navigational hazards at sea. Plastic pollution is the most prevalent issue damaging the marine environment. It also endangers the health of the oceans, food safety and quality, human health, and coastal tourism, and contributes to climate change. 2) Effects on the Maritime Environment: Ingestion, suffocation, and entanglement of hundreds of marine species are marine plastics' most apparent and unpleasant effects. Plastic garbage is mistaken for prey by marine species such as seabirds, whales, fish, and turtles, and the majority of them die of starvation as their stomachs fill with plastic debris. They also suffer lacerations, infections, impaired swimming capacity, and internal damage. Floating plastics also aid in the spread of invasive marine species and germs, causing ecosystem disruption. 3) Food and Health Ramifications: Invisible plastic has been found in tap water, beer, and salt, and in all samples taken from the world's oceans, even the Arctic. Several chemicals used in the manufacture of plastic products are carcinogenic and disrupt the body's endocrine system, producing developmental, reproductive, neurological, and immunological issues in both humans and wildlife. Toxic pollutants can also build up on the surface of plastic products after prolonged exposure to seawater.
When marine creatures consume plastic garbage, the toxins enter their digestive systems and build up in the food chain over time. The transfer of pollutants between marine organisms and people via seafood consumption has been identified as a health risk but has not been thoroughly investigated. 4) Climate Change Impact: Plastic, which is derived from petroleum, also contributes to global warming. When plastic garbage is burned, it emits carbon dioxide into the atmosphere, increasing carbon emissions. 5) Tourism Effects: Plastic trash degrades the visual value of tourist attractions, resulting in lower tourism-related income and significant economic expenditures associated with site cleaning and upkeep. B. Indian Scenario 1) Aquatic Animals: In India, marine fishes along the southwest and southeast coasts of almost all the states show signs of plastic ingestion. On analyzing the gut content of commercially important fishes from the beaches of the southeast coast of India, it was revealed that microplastic ingestion was found in 10.1% of the fishes. 2) Beaches/Sediments: Researchers have found a significant amount of microplastics on the beaches of different coastal areas in India. In March 2015, right before the Chennai flood, and in November 2015, right after it, the microplastic pellets in the surface sediments along the Chennai coast were examined for source of origin, surface features, distribution, polymer composition and age. Elements of nature such as the wind and surface currents were credited with transporting and depositing these pellets from the sea to the beaches. A similar pattern was observed for the beaches in Goa as well. 3) Salt: India is considered a leading producer and exporter of sea salts. Most people consume food products containing commercial salt all their lives. Numerous studies are revealing the presence of microplastics in salts. As per the World Health Organisation (2012) guidelines, a salt intake of up to 5 g per day for adults is recommended. An IIT-B study on different brands of Indian sea salts found the presence of microplastic particles in all eight studied brands, and the types of microplastics present were found to be independent of the salt brand and packaging type. The maximum microplastic ingestion for Indians from microplastic-contaminated salts was close to 117 micrograms every year. Salt consumption can be considered long-term exposure for the human population and is a primary reason we must deal with the plastic waste in the ocean before it breaks down into non-retrievable microplastics. 4) Freshwater, Surface water, Groundwater, Drinking water, Ballast Water: Vast data exists in the literature on marine microplastics, while only brief reports exist on other water sources. Microplastics were found in the sediments of Vembanad Lake, Kerala, and in the sediments of the river Ganga. The microplastics found in the sediments of the river Ganga revealed a strong correlation between microplastic abundance and pollution traits. The sediment samples from different sea beaches pointed to different contributing factors, such as wind, surface currents, fishing, religious activities, and recreational activities. II. UNDERSTANDING PLASTIC A. Polymers Polymers are materials made of repeating units called monomers. They can be classified based on whether the polymerization can be reversed or not.
Thermosetting plastics become infusible once set, and it is hard to recycle them. On the other hand, thermoplastics undergo single-stage polymerization and only form linear chains; this allows recycling and reprocessing. B. Thermoplastic Polyurethane (TPU) TPU is a thermoplastic elastomer, a linear block copolymer made up of soft and hard segments. The hard segment can be either an aromatic compound or an aliphatic compound, based on the qualities required in the end product. For instance, aromatic hard segments are usually used, but if the product's primary requirement is a specific colour and clarity retention in sunlight, then aliphatic hard segments are preferred. Aromatic TPU contains isocyanates such as methylene diphenyl diisocyanate (MDI), while aliphatic TPU contains hydrogenated MDI. These isocyanates, when combined with short-chain diols, form the hard blocks. The soft segment is usually either polyether-type or polyester-type and is chosen based on the requirements as well. For example, polyether-based TPU has its application in wet environments. Polyester-based TPU, on the other hand, is used when oil/hydrocarbon resistance is a priority. Different combinations of hard and soft segments give TPU a wide range of properties. The unique structure contributes to high resilience and resistance to abrasion, impact, and the weather. It offers flexibility without the use of plasticizers. Fig. 1. Structure of TPU A study conducted on the recycling of TPU reveals that on constant reprocessing (recycling), the degradation of the hard and soft segments takes place at different rates and is mass-dependent. The remarkable conclusion from this study was that the performance of a fabric produced from TPU that has been recycled eight times was similar to that produced from TPU that has only been recycled once. [16] Fig. 2. Test results for [16] To establish a basic understanding of the other plastic waste materials found on the coastline, we shall also look at plastic bottles and the polymers used in manufacturing them. The number mentioned within the recycling symbol is called the SPI code and is used to distinguish the different plastics. 1) PET-Polyethylene Terephthalate Plastic bottles made of this polymer are used as water/beverage bottles. a) A thermoplastic polymer whose opacity can be tweaked according to its material composition. b) It is produced from petroleum hydrocarbons via a reaction between ethylene glycol and terephthalic acid. c) PET waste, aka "post-consumer PET", is sorted based on opacity, colour, or origin (the corporation within which it was produced, in case it needs to be sent back to the manufacturers for recycling), following which it is sent to material recovery facilities. d) The accumulated waste is then cleaned via hot and cold washing and subsequently through acidic water to ensure no dirt gets left behind before recycling. e) The waste undergoes treatment, i.e. crushing the bottles into small PET flakes. These are then used as raw materials for a variety of different recycled products. 2) P.E.-Polyethylene: High-density polyethylene is used to make detergent bottles, while low-density polyethylene is used to make squeeze bottles (ketchup bottles). a) It consists of a single monomer, ethylene, and is therefore classified as a homopolymer. b) LDPE is amorphous, which explains its ductility, while HDPE is crystalline, accounting for its higher rigidity. c) Because of its internal structure, HDPE is more robust than PET and can be recycled/reused safely.
d) Therefore, LDPE has the simplest structure and is easy to process into plastic bags but not as easy to recycle. 3) V/PVC-Polyvinyl Chloride a) One of the oldest plastics. b) It is initially very rigid but becomes flexible on the addition of plasticizers. c) PVC plastic contains harmful chemicals and is therefore not used for storing edible substances. d) It is mainly recycled into flooring or panelling. 4) P.P.-Polypropylene a) It is sturdy and can withstand high temperatures. b) Used to make Tupperware, car parts, thermal vests, etc. c) It gets recycled to produce heavy-duty items. 5) P.S.-Polystyrene Also known as Styrofoam. a) Used in making disposable cups and dinnerware, packing materials, etc. b) Similar to P.P., it is hard to recycle, even though some recycling plants may accept it. 6) Misc. Plastics: These plastics are used everywhere but are considered unsafe because of the toxic chemical called bisphenol A or BPA. They are hard to recycle as they do not break down easily. FENC, a corporation based in Taiwan, not only has a fibre bottle recycling rate of about 95%, which is the highest in the world, but is also a leader in textile dyes. Bottles dumped in the ocean are being made into high-quality shoes thanks to the joint efforts of Adidas, Parley, and FENC. According to Parley, the target of one million pairs of shoes was met, which means that in 2017, Adidas and Parley recovered and recycled eleven million water bottles. Once obtained by FENC, these water bottles are transformed into textile fibres used in Ultraboosts, such as recycled polyester and polystyrene, and, perhaps most importantly, thermoplastic polyurethane. Unlike conventional plastic textiles, which are made from petroleum and are therefore not environmentally friendly, the polyester used in the Adidas X Parley Ultraboosts is recycled. Plastic water bottles, also known as "PET bottles", are used instead of petroleum as a raw material. These 'PET' bottles are mainly collected from the oceans. PET bottles are sterilized before being compressed and crushed into tiny plastic chips, which is how recycled polyester is made. These chips are then heated and spun into long strands of yarn using a spinneret. The yarn is then wound onto spools and crimped to give it a woolly texture before being dyed and knitted into the fabric used to manufacture shoes. In the Ultraboost, the upper part, which covers the top of one's foot, is made of recycled polyester. The rigid insert that extends under the wearer's heel is called a heel counter, and according to an Adidas supplier called "Framas", it is composed of 50% recycled polystyrene from food packaging. They produce 110 million heel counters annually and prevent 1,500 tons of waste from ending up in landfills. Traditional counters, on the other hand, are usually made of new (unprocessed) materials, such as thermoplastic rubber and polystyrene. The environmental benefits of recycled marine plastics outweigh the benefits of using virgin materials, but the concern of durability remains. A study conducted in 2004 by Kobe University in Japan found that recycled polyester fibres are more prone to fatigue than new fabrics. However, that analysis ignores the advantages of recycled materials and the growing urgency to combat plastic pollution, which is rapidly degrading our oceans. The ocean and our atmosphere benefit tremendously from using these resources, but they can also benefit us, the consumers.
Thermoplastic polyurethane is perhaps the most significant recycled plastic material in the Ultraboost, and as we have seen before, it has various benefits. Due to its high energy absorption and lightness, the Ultraboost outsole uses bead foaming technology utilizing recycled TPU. The same recycled plastic is used in the other parts of the shoe, and the technology is named "Infinergy" by its inventor, a German manufacturer. There are many advantages to using thermoplastic polyurethane; the material is solid and can withstand a wide range of temperatures, but the main advantage is the relatively low density of the bead foam combined with a very flexible texture. In the initial step of the high-tech thermoplastic polyurethane manufacturing process, a mixture of polymer and gas must be produced. Following this, the nucleation of cells occurs, and the nuclei that are produced act as centres of growth for the cells [24]. The cells continue to grow until they reach the desired size. The cells are then stabilized by a sharp drop in temperature, which prevents further expansion. Each bead is expanded by using a blowing agent. These particles are expanded in the mould and stick together to achieve the desired shape by passing hot steam through the mould. Made from recycled water bottles, these TPU soles are not only heat-resistant, lightweight, flexible and energy-absorbing, but also fully biodegradable. By adding a digestive enzyme called protease, the sole can be decomposed entirely in 36 hours. There is much scope for more research in this direction. TPU is also used in Adidas' recent prototype called Futurecraft Biofabric. With the TPU outsole, this fabric can be used in new Adidas shoes to make them more environmentally friendly. To shield the energy-absorbing ocean plastics, the TPU outsole is made of beads and encased in rubber. Continental, an Adidas affiliate and manufacturer, deals with the production of the rubber. Continental creates a high-quality encasing for the outsole of the Ultraboost. Even though they make both natural and synthetic rubbers (both of which use crude oil as a primary raw material), the rubber used in the outsole of Ultraboosts is 100% natural and is called "Stretchweb." Natural rubber is derived from rubber trees and is environmentally friendly. Synthetic rubbers take a long time to degrade and often end up in landfills. Continental uses these environmentally friendly materials to create a high-grip, high-energy-absorbing outsole. A TPU "Torsion bar" device is also found on the Ultraboost's outsole. This technology is designed to relieve pressure on the middle of the foot by allowing the heel and forefoot to move independently. Finally, the most remarkable thing about the Adidas x Parley Ultraboosts is that these recycled materials were used without compromising the product's consistency. The industry-leading 'BOOST' technology is an example of this, as the outsole is made of recycled thermoplastic urethane, which is biodegradable and has a natural rubber grip. The shoe upper is recycled polyester, and the heel counter is made of recycled ocean plastic. The use of both of these products together leads to the creation of a more environmentally friendly shoe. Adidas and Parley are making a difference in sneaker sustainability and recycled plastic use by using these approaches.
Not only that, but they have also designed and manufactured one of the most technologically advanced and environmentally friendly sneakers available. 2) Supply Chain Analysis: Although this project has successfully converted plastic ocean waste into fully functional shoes over the years, it is essential to question whether the emissions from the production process counter the apparent environmental benefits. Adidas claims to have prevented 2,810 tons of plastic from reaching the oceans. Moreover, each item in the Parley collection is made from at least 75% intercepted marine trash. It is important to note that production using recycled polyester yarn has more benefits than using virgin materials, as recycled polyester uses fewer chemicals and raw materials while also helping in reducing plastic pollution. These benefits are not apparent but play a vital role in making the process of production greener. FENC, the Taiwan based corporation, was revealed to face two significant problems during the initial stages of production. First, the contamination of PET bottles collected from the oceans was quite different from the domestic waste they were earlier used to dealing with. They were equipped with an entire supply chain ranging from recycling to spinning fibres to weaving fabrics, but ocean waste was something they had never treated prior to this collaboration. Under different circumstances, contaminated waste from the ocean would have been classified as second-grade raw material. However, this was eventually overcome to some extent, and accommodations were made within the supply chain to account for processing cleaner raw waste materials-and although it is not quite up to the level of a typically recycled PET bottle-it is still a considerable feat. Another implication of ocean contamination meant the up-cycled fabric could only be dyed in a limited range of colours. The second major problem was the transportation of ocean waste from beaches in the Maldives to the processing unit in Taiwan. Prior to this, there was no proper recovery system in the Maldives. Usually, the collector would be expected to condense PET into bricks and efficiently transport over 20 tons of waste, but only eight tons were received on the arrival of the first shipment. This situation arose because the Maldives had never treated PET bottles from the ocean before, and a subsidiary of FENC, called Oriental Resources Development Limited, had to step in. They managed to set up recovery stations in twelve islands near the Maldives, which helped reduce transport and production costs, and treatment in Taiwan became much more accessible. As one can infer from the previously stated instances, the primary reason for hindrances in production was the lack of proper infrastructure required to convert plastic waste from the oceans. Both were technical issues that arose from the fact that recycling marine trash had never been done before. Once an efficient setup is built, CO2 emissions would not be a significant concern. Moreover, via this project, Adidas and Parley have managed to create and update the world's first supply chain for upcycled marine waste, which is constantly under a green microscope, i.e. focuses on sustainable fashion and industry choices. The outcome of this project was not just the shoe but an entire ideology that is expected to shape sustainability in fashion for the better. 
As the supply chain continues to develop and production accelerates, there will be less virgin plastic demand, reduced CO2 emissions and increased awareness of the issue. They have also expanded their collection network to places other than the Maldives like the Dominican Republic and Sri Lanka, thereby preventing increased plastic pollution in the Indian Ocean and the Caribbean Sea. The rapid production increase, i.e. 1 million pairs in 2017, 5 million pairs in 2018, 11 million pairs in 2019, and the planned estimate of 15 million pairs in 2020, can only indicate that this has been a very successful project and can be implemented on an even larger scale, by the development of green supply chains all around the world. B. The Ocean Cleanup Sunglasses 1) Introduction: In his independently published book, Boyan Slat talked about the feasibility of 'The Ocean Cleanup Array, a unique method to collect copious amounts of plastic debris from the famous accumulation region called the Great Pacific Garbage Patch (GPGC). He founded a non-profit organization called "The Ocean Cleanup" in 2013, which currently implements his ideology and collects mismanaged plastic waste from the GPGC, also known as the Pacific trash vortex, located in the central North Pacific Ocean. The first actual implementation, after years of modelling and research, was carried out in 2018. This system provided much insight but could not effectively catch plastic and had to return to the shore prematurely because of structural failure. Utilizing previous research and analyzing the inadequacies in System 001, a new cleanup system was developed called System 001/B. It was launched, and collected plastic which was then brought to the shore, and the project went into its valorization stage: the recycling phase. A significant aim has always been to convert the collected plastic into a durable product and fund further expeditions by selling the product above. Moreover, the plastic debris collected in 2019 from System 001/B, which included a large volume of fishing nets, was pooled to the shores of Vancouver and then later transported to Rotterdam in the Netherlands for the process of recycling. The contents were sorted, cleaned and then shredded into shards. These were then further processed to form small pellets, which were used to make the final product. 2) Process of Plastic Collection: The garbage patch is present in an ever-revolving vortex which causes shifting regions of high concentration of plastic. Using a computational model, the project predicts the position of these hotspots and a cleanup system is positioned accordingly. The basic ideology behind the retention of plastic is creating a false shoreline(a sort of physical barrier) upon which plastic washes upon. A relative speed difference is maintained between the plastic and cleanup system. The wingspan of the system, speed and direction are monitored according to the requirement and are sustained by the deployed vessels. The plastic debris then washes up on the false "shoreline" called the retention zone. Once the system is filled, the back of the retention zone is pulled back up on board. It is sealed, detached from the system, and the contents are emptied on the ship/vessel. The retention zone is then put back in the ocean, and the process of cleaning up continues. As the containers on the ship get filled, they are brought back to the shore for processing and converting into products. 
3) The Making of the Sunglasses: The CEO and Founder of The Ocean Cleanup stated that their product out of the collected plastic was designed in California by Yves Behar. It was produced in Italy by a company called Safilo and sold for USD 199; all proceeds go to fund the operation of the plastic collection even further. C. Other Notable Endeavours G-Star RAW, Parley for the Oceans, Bionic Yarn and Pharrell Williams resulted in denim and apparel made of recycled ocean plastic. The first collection contained 33% recycled plastic, but as it serves to be a long term collaboration, the main goal would be to include more and more recycled plastic. The first line used 10 tons of recycled ocean plastic and was available in stores on September 1, 2014. The company has since moved on and made other more sustainable strides. An athleisure brand called the Girlfriend Collective has been putting out leggings made from recycled plastic bottles, 25 to be precise. They have also launched a collection made from recycled fishing nets, contributing to corporations utilizing recycled ocean plastic. Another attempt to deal with ocean pollution was made by the U.K. based company called GANT. They started the "Beacons Project", which had them team up with Mediterranean fishermen. The plastic caught by these fishermen while fishing was collected to upcycle into button-down shirts. IV. OUR LEARNINGS Studying various global collaborations helped us analyze their supply chains and the problems some faced while dealing with ocean plastic. The solid conclusion drawn from our analysis seems to be that such a setup is possible when companies and the government consciously choose to put in the effort needed. The initial efforts should be implementing a plastic collection drive that is not a onetime expedition but a more regulated and periodically carried out task. If the supply chain has to be set up within the Indian Subcontinent and the target location for marine plastic sourcing is the Indian Ocean Garbage Patch, then the first step would be to monitor where the debris hotspots are. The Sea Education Association (SEA) has been researching garbage patches for 25 years. Mathematical and Physical models of these garbage patches can be drawn up to estimate the hotspot's location. Using one such model, called the Maximenko model, a paper estimated that the amount of plastic debris in the Indian Ocean was equal to 2185 tons [17]. Once a high concentration spot is found, a system, such as the one suggested in the Ocean Cleanup project, can be set up. This would ease out the plastic collection process, and we would not have to depend on manual labour. We could even implement the ideology used in the Gant project and collect all the plastic caught in the fishing nets of the fishermen in India instead of having them throw it back in the waterways. After the collection step, we would have to focus on transporting plastic waste to the processing plant in the most effective way, one with the most diminutive carbon footprint. We could take inspiration from the Adidas x Parley collaboration and set up plants near the ocean, where the plastic collected can be compressed into blocks and make transportation more accessible. After large quantities are transported effectively in one trip, our focus would shift to processing the collected plastic. Many of the plastic processing plants are not equipped with the technology needed to process ocean plastic which is often contaminated. 
Therefore a wise step would be to fund research and instate the needed chemical treatment machines. Crossing the barrier of ocean contamination will lead us to the essential part of the production process: recycling. As all the above studies have shown us, the plastic is first cleaned and broken into shards. These shards are then processed in different ways to form fine yarn by spinning or to form soles of shoes through the chemical process of nucleation in polymers. A similar procedure can be followed for any endeavour that may be calibrated to produce fashion from ocean plastic in India. We have included a flowchart based on our learnings. V. CONCLUSION It is essential to understand that the fashion industry is a massive contributor to the plastic waste found in the oceans. With microfibers accumulating with every wash to the plastic waste collected during production, all aspects of the fashion industry are, to a significant extent, responsible for the plastic pollution described in the chapters above. As a result, the fashion industry needs to take up greener practices and walk down a more sustainable path. This thought led to form the basis of our paper on how sustainable supply chains should be set up to collect plastic waste from oceans to turn them into clothing apparel. In our paper, we have successfully suggested setting up a sustainable chain in the form of a flowchart and have studied various collaborations and efforts on the part of global corporations to create a greener and more environment-friendly fashion industry.
2021-11-04T15:21:30.284Z
2021-10-31T00:00:00.000
{ "year": 2021, "sha1": "1a0c972517d93503e7408ffa9eb5e2a12e665465", "oa_license": null, "oa_url": "https://doi.org/10.22214/ijraset.2021.38724", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "5a828b470138b71c68a2048644a926f78abf5a78", "s2fieldsofstudy": [ "Environmental Science", "Business" ], "extfieldsofstudy": [] }
251272150
pes2o/s2orc
v3-fos-license
An Innovative Technique of Testicular Preservation in Fournier’s Gangrene: Surgical Details and Illustration Fournier’s gangrene, which is a necrotizing fasciitis of the perineal region, requires prompt control of infection with emergent surgical debridement. The shameful exposure of gonads, which occurs following debridement, can cause both physiological and psychological impairment to the patient. These can be avoided by the use of this novel technique for testicular preservation. Following debridement of necrotic scrotal skin, this technique involves creation of inguinal pouch by blunt dissection and placement of the testes in the pouch created. Once healthy granulation tissue is achieved in the scrotal wound, closure of the scrotum is performed after bringing down the testes. The advantages of this technique include development of a relatively physiological position to preserve the testes before definitive reconstruction of the scrotum and the easy reproducibility of the technique. A holistic approach to management of Fournier’s gangrene should include resuscitation, administration of antibiotics, debridement, and scrotal reconstruction. However, the psychological impact of shameful exposure of the gonads must also be borne in mind during the management. Our technique represents one of the ways to reduce the stigma and discomfort associated with shameful exposure of the testes. Introduction Fournier's gangrene, which is a necrotizing fasciitis of the perineal region, has always posed a unique and stressful problem to both the patient and the surgeon. This spreading gangrene of the scrotal tissues can involve the genital, perianal, and surrounding tissues of the thigh or abdominal wall, requiring rapid resuscitation, debridement, and administration of appropriate antibiotics [1]. The resultant defect of the scrotal wall often leads to shameful exposure of the testes, which creates a distinctive psychological stress to the patient including aesthetic complications and physiological stress to the testes. This paper describes the surgical details with illustrations of a novel technique of testicular preservation following extensive debridement for Fournier's gangrene by placing the testes in a surgically created inguinal pouch, thereby preventing exposure of healthy gonads to the environment. Description of the technique The management of Fournier's gangrene essentially starts with adequate resuscitation with intravenous fluids and antibiotics along with emergent surgical debridement. The steps of this innovative technique of inguinal pouch creation for Fournier's gangrene management include the following. Step 1: Debridement of Necrotic Scrotal Skin Extensive debridement of necrotic scrotal skin and surrounding tissues is to be performed, ensuring healthy margins with active bleeding (Figure 1). Testes are usually found to be healthy and preserved. After removal of necrotic tissues, thorough wash with saline is performed. FIGURE 1: Exposed testes following debridement The testes are shamefully exposed after removal of necrotic scrotal skin. Step 2: Creation of Inguinal Pouch After primary debridement and thorough washing, space is created in the inguinal region by blunt dissection. Two retractors are placed at the level of superficial inguinal ring to expose the inguinal canal. 
Adequate space is created in the bilateral inguinal canal, anterior to the inguinal cord, with blunt finger dissection or with the use of a gauze piece held at the tip of an artery forceps or a sponge holder (Figures 2, 3). FIGURE 5: Placement of the testis in the left inguinal pouch The left testis is gently pushed manually into the inguinal pouch created. Step 4: Repeat Debridement The scrotal wound is dressed regularly. Various options, such as simple saline dressings and application of negative-pressure wound therapy (NPWT) device, are utilized to help in healthy granulation and healing of the scrotal wound. The patient is taken up for repeated surgical debridement when needed. During redebridement, the testes are also brought down from the pouch and washed with saline. They are then placed back in the inguinal pouch. Step 5: Closure of the Scrotum Once healthy granulation tissue appears in the residual scrotal wound, the testes can be brought down to the scrotum. After mobilizing the residual scrotal skin, the scrotum is closed. However, in a few cases, skin grafting may be required to close the defect. Discussion Fournier's gangrene refers to a necrotizing fasciitis of the perineal region involving the genital, perianal, and other surrounding tissues [2]. It is a polymicrobial infection caused by both aerobes and anaerobes, which causes extensive tissue damage and subcutaneous vessel thrombosis and eventually gangrene. Though initial description by Jean Alfred Fournier in 1883 attributed this spreading scrotal infection to be idiopathic [3], further study into the etiology of this condition has revealed three main causes. These include urogenital causes such as perurethral catheterization, urethral stricture, vasectomy, prostate biopsies, epididymitis, and chronic urinary tract infection; anorectal causes such as hemorrhoid surgeries, colorectal malignancies, anal intercourse, or local trauma; and dermatological causes such as animal or human bites, burns, piercings, and injections. The commonly implicated organisms include aerobes such as Escherichia coli, Staphylococcus, and Klebsiella, and anaerobes such as Bacteroides. They are frequently seen in elderly males with comorbidities; however, young males, females, and children are not any exception to this disease [4]. Diabetes mellitus, chronic alcoholism, and obesity are some of the commonly associated comorbidities. The use of various scores such as Fournier's gangrene severity index has been described to predict the mortality in these cases, based on the degree of derangement of physiology [5]. In various series, the reported mortality rate ranges from 3% to 67% [6]. The management of Fournier's gangrene needs to be prompt and emergent. Immediate resuscitation with intravenous fluids to correct dehydration and shock, administration of a combination of broad-spectrum antibiotics such as third-generation cephalosporins, aminoglycosides, metronidazole, or clindamycin [7], and correction of acidosis and deranged blood sugars in case of diabetics form a pivotal role in the initial management. Urgent surgical debridement to remove the source of sepsis is the definitive treatment in such cases [8]. Following surgical debridement, the surgeon is faced with a problem of shameful exposure of the testes and inadequate tissue to cover the exposed gonads. Active infective process precludes the chance of primary closure of wound at the same sitting as debridement. 
Thus, a temporizing measure to preserve the gonads is warranted in these patients. Methods such as placement of the testes in thigh pouches have been described in existing literature [9]. Our study is unique in describing a novel technique of creation of the inguinal pouch for testicular preservation following debridement for Fournier's gangrene. We applied this technique on 15 patients with Fournier's gangrene. They were managed with appropriate antibiotics, immediate surgical debridement, and primary preservation of the testes in the inguinal pouch created (Figures 6A-6D). Following primary debridement, saline dressings were done regularly. Three patients also required NPWT. The testes could be brought down from the pouch, inspected, washed, and placed back during repeat debridements ( Figure 7A). The number of re-debridements ranged from two to eight. The process was repeated until the wound appeared healthy with granulation tissue. In all our patients, the defect could be closed primarily after bringing the testes down by mobilizing the remaining scrotal tissue ( Figure 7B). No complications such as local wound infection or testicular atrophy were documented following use of this technique. The use of an inguinal pouch to preserve the testes during the interim period was found to cause less physical and psychological trauma than when the testes were exposed. It was also quite simple to perform. The choice of dressing following initial surgical procedure can vary from simple saline dressings to NPWT device. The use of NPWT for fastening the healing process is well-recognized and is attributed to improved angiogenesis [6]. After healthy granulation tissue develops, a plan for reconstruction of the scrotum is considered. Various reconstruction techniques after replacement of the testes back to the scrotum have been used, such as delayed primary closure of the scrotal skin [10], and the use of local scrotal advancement flap, splitthickness skin graft, superomedial thigh flap, pudendal thigh flap, medial circumflex artery perforator flap, and gracilis flap [6]. Use of tissue expanders to mobilize the redundant scrotal skin locally has also been described [11]. Due to the laxity of the scrotal skin, a delayed primary closure is possible in most cases after repositing the testes to the scrotum, as observed in our study. Overall, the advantages of this new technique include development of a relatively physiological position to preserve the testes before definitive reconstruction of the scrotum and the easy reproducibility of the technique. It also facilitates easy placement of the NPWT system, which might be problematic when the testes are exposed. The drawback includes the lack of randomized trials to understand the outcomes of this technique in comparison to the existing ones. Even though a theoretical risk of spread of infection to the unaffected inguinal region exists, none were observed in our patients. However, studies with a greater sample size would help to ascertain this outcome. Conclusions Fournier's gangrene is a surgical emergency that requires prompt resuscitation and debridement to control sepsis and prevent mortality. This technique of creation of an inguinal pouch for testicular preservation in case of Fournier's gangrene is a novel and feasible approach to avoid shameful exposure of the gonads. An array of reconstruction techniques is available to shape the remaining scrotal skin to accommodate the testes. 
Adequate caution and prompt care, bearing in mind the psychological impact of this disease on the patient, completes the holistic approach to the management of Fournier's gangrene. Additional Information Disclosures Human subjects: Consent was obtained or waived by all participants in this study. Animal subjects: All authors have confirmed that this study did not involve animal subjects or tissue. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
2022-08-03T15:04:36.134Z
2022-08-01T00:00:00.000
{ "year": 2022, "sha1": "6ff43b9dde4bfada370e0a1214bc3d847fc99caf", "oa_license": "CCBY", "oa_url": "https://www.cureus.com/articles/106575-an-innovative-technique-of-testicular-preservation-in-fourniers-gangrene-surgical-details-and-illustration.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "4fca9117d13786d895017ebcd80886f954632b33", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
84318610
pes2o/s2orc
v3-fos-license
Assessing Drought Tolerance of Snap Bean (Phaseolus vulgaris) from Genotypic Differences in Leaf Water Relations, Shoot Growth and Photosynthetic Parameters

Abstract The leaf water relations, photosynthetic parameters and shoot growth of five snap bean cultivars were assessed during a drought period to determine their role in alleviating the plant water deficit imposed by withholding irrigation at flowering. Soil water content of irrigated plants was 18-20% while that of unirrigated plants was 6-10% at 60 days after seeding (DAS). Leaf water potential was approximately 0.15 MPa lower and relative water content was approximately 5% lower in unirrigated plants than in irrigated plants at 57 DAS. Unirrigated plants had a lower stomatal conductance (gs) and intercellular CO2 concentration (Ci). Reduced leaf water potential and relative water content were associated with a decreased stem elongation rate. Plants with a lower stem elongation rate had a higher specific leaf weight and succulence index (SucI). Significant differences among the five snap bean cultivars were found for all parameters measured. Decreased leaf water potential and stem elongation rate resulting from drought contributed to preserving relative water content and improving specific leaf weight and SucI. Maintenance of a higher relative water content increased gs and Ci. Cultivars that maintained a high relative water content while leaf water potential and stem elongation rate decreased markedly were more tolerant to drought than those in which relative water content was reduced and leaf water potential and stem elongation rate were only slightly lowered. Reduced yield (pods per plant and seed biomass) resulting from drought was associated with reduced relative water content.

The productivity of kidney bean and snap bean (Phaseolus vulgaris L.) is drastically reduced in the summer season on the subtropical islands of Japan. One of the main reasons for the decreased productivity is the decrease in tissue water content due to excessive water loss through rapid transpiration caused by high temperature and water deficit (Omae et al., 2004a; Kumar et al., 2005). Under water stress conditions, snap bean continues vegetative growth, but at reduced photosynthetic rates (Suzuki et al., 1987). In many crop species, including snap bean, even small diurnal fluctuations in leaf water status at anthesis can adversely affect the activity of reproductive structures (Saini and Aspinall, 1982; Kuo et al., 1988; Weaver and Timm, 1988; Tsukaguchi et al., 2003). Therefore, lack of photosynthate supply to the reproductive organs may not be the only cause of premature abortion and abscission. However, this issue has not been investigated well in snap bean. Some plants can maintain photosynthetic rates at low leaf water status through changes in leaf anatomical characteristics and CO2 conductance (Evans et al., 1994). Recently, Omae et al. (2005) reported that genotypic differences in leaf water status of snap bean correlated with crop productivity under drought conditions. This suggests that differences in leaf water status exist among snap bean cultivars, which may be linked to drought tolerance mechanisms. However, Omae et al. (2005) did not assess how drought influenced photosynthetic parameters or vegetative growth. In a glasshouse study, water uptake in snap bean under different temperature conditions was related to shoot extension rate (Omae et al., 2004b).
Understanding the influence of drought on leaf water relations in relation to photosynthetic parameters and growth under field conditions is crucial for identifying the causes of yield reduction and the underlying mechanisms of drought tolerance in snap bean. The objectives of this study were (1) to evaluate genotypic differences in leaf water relations, photosynthesis, and shoot growth of snap bean under water stress, and (2) to determine which, if any, of the measured parameters could be useful for screening snap bean germplasm for drought tolerance.

Materials and Methods
This study was conducted at the Okinawa Subtropical Station, Japan International Research Center for Agricultural Sciences (JIRCAS), Ishigaki Island, Japan. Snap bean cultivars Kentucky Wonder, Haibushi and Kurodane Kinugasa, and strains Ishigaki-2 and 92783 (hereafter referred to as "cultivars") were planted on 29 Nov. 2004 in field beds in a net house (5 m × 20 m, covered with white cheesecloth) under natural conditions. The net house was covered with a polyethylene sheet on top. There were four rows, each composed of five plots, in the net house. The plot size was 2.5 m2 (2.5 m in row length and 1.0 m in width) for all genotypes. The soil was a red-yellow podzolic, highly acidic soil (pH 4.6) with a fine to medium texture. For unirrigated plots, a fibrous polyethylene sheet (1 mm thick) was buried at 60 cm soil depth before seeding to prevent roots from growing into deeper soil layers while allowing drainage of excess water. Ten plants of each genotype, spaced 25 cm apart, were grown in a row in each plot. Two irrigation treatments, irrigated and unirrigated, were imposed. Irrigated plants were drip-irrigated regularly, while unirrigated plants received no irrigation after the flowering stage (42 days after seeding, hereafter referred to as DAS). Volumetric soil water content was measured with a portable soil water content measurement system (Hydrosense, Campbell Scientific, Inc., N. Logan, UT, USA) connected to a 20 cm probe rod.

Leaf water relations
Leaf water relations were measured at 57 DAS. After measurement of photosynthetic parameters, the same leaves were used to evaluate leaf water relations between 1100 and 1200 h. Leaf water potential, relative water content and osmotic potential were measured on the same trifoliate leaf: the middle lamina for leaf water potential and the side laminae for relative water content and osmotic potential. Leaf water potential was measured by the pressure chamber method (Scholander et al., 1965). For determination of osmotic potential, lamina tissues (without the midrib vein) were placed in a 5 ml syringe barrel. The syringe was kept at −20ºC until measurement. Osmotic potential was determined after thawing the frozen samples on ice for 1 h. The sap was expressed, and osmotic potential was measured on 10 µL aliquots placed in an osmometer (Model 5520, Wescor Inc., USA). Relative water content (RWC) was estimated according to the equation RWC (%) = (Mf − Md) / (Ms − Md) × 100, where Mf, Md and Ms are the fresh, oven-dried and water-saturated mass of the leaf discs, respectively. A sharp cork borer was used to collect eight leaf discs 12 mm in diameter, avoiding the mid-rib and major veins. Ms was determined after the leaf discs were floated on distilled water for more than four hours in darkness. Leaf disc samples were dried in an oven at 65ºC for eight hours to record Md.
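As a concrete check of the RWC computation described above, a minimal sketch follows; the disc masses are hypothetical illustrative values, not data from this study.

```python
# Minimal sketch: relative water content (RWC) from leaf-disc masses.
# The mass values below are hypothetical, for illustration only.
m_fresh = 0.152      # fresh mass of the eight discs, g
m_dry = 0.031        # oven-dried mass, g
m_saturated = 0.168  # water-saturated mass, g

rwc = (m_fresh - m_dry) / (m_saturated - m_dry) * 100
print(f"RWC = {rwc:.1f}%")  # -> RWC = 88.3%
```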
Leaf anatomical characteristics The data recorded to estimate relative water content were also used to calculate specific leaf weight (SLW) and succulence index (SucI) according to the following equations SLW = M d ⁄ L a and SucI = (M f −M d ) ⁄ L a , where, L a is total leaf area sampled. Shoot growth The stem elongation rate was recorded between 45 and 60 DAS. Measurements were made about the same time on every occasion. Three main shoots per cultivar per plot were marked with a tag at 4 or 5th internode from the top of the shoot. Measurements of length using a meter rule were made from the base of the tagged internode to the top of the tagged shoot at an interval of three to four days. The length of each internode in the shoot was recorded separately at each time of measurement. Stem elongation rate (SER) was calculated according to the following equation: where, L 1 is the sum of the lengths of internodes at the beginning and L 2 is the sum of the length of internodes at the end of a time interval, t. Photosynthetic parameters Photosynthetic parameters were measured at 57 DAS on the youngest fully expanded leaf (4 or 5th from the top) of the plants in each treatment between 1000-1100 h. A portable photosynthesis system (LI-6400, LI-COR, Lincoln, Nebraska, USA) was used to measure the photosynthetic parameters such as net photosynthetic rate (P n ), stomatal conductance (g s ), transpiration rate (E), intercellular CO 2 concentration (C i ) and vapor pressure deficit (VpdL). Photon flux density (PAR, 400-700 nm wave length) was recorded with a gallium arsenide phosphide PAR sensor mounted on the leaf chamber. Transpiration rate was calculated by the following equation: where, E, F, W s , W r and S are transpiration rate (mol m -2 s -1 ), air flow rate (µmols -1 ), sample and reference water mole fractions (mmol H 2 O (mol air) -1 ), and leaf area (cm -2 ), respectively. Ambient air temperature and relative humidity were recorded automatically with a temperature humidity logger (Model SK-L200TH, Sato Keiryoki, Mfg. Co. Ltd., Japan). Number of pods and seed yield All mature pods in each plot were harvested at two weeks to determine the number of pods per plot. Pods in each plot were sun dried for a week and threshed for seed yield. All values of the number of pods and seed yield per plot were converted to per plant by dividing with the number of plants per plot. The number of seeds per pod and seed weight were measured in 20 pods and 20 seeds, respectively in each plot. Experimental design and statistical analysis The experiment was conducted according to two factorial designs, irrigation and cultivar. Two different levels of irrigation, irrigated and unirrigated treatment, were designed with two replications for measurement of soil water content, yield and yieldattributes, and without replication for leaf water status, stem elongation rate and photosynthetic parameters. Five cultivars were planted in each irrigated and unirrigated plot. For leaf water status, stem elongation rate and photosynthetic parameters, data were taken from three plants in each plot. The mean values in each plot were statistically compared by Student's t-test (n=3) using JMP software (Ver.5.0, SAS Instute, Japan) program. The ambient air temperature and relative humidity were recorded every one hr and averaged. The data for soil water content (0-20cm in depth) were taken from 4 spots in each plot, averaged and regarded as one replication. Two replicated data were used for the analysis. 
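To make the leaf-anatomy and shoot-growth calculations in the Methods above concrete, a minimal sketch follows. SLW and SucI are taken directly from the formulas given in the text; the stem elongation rate formula is reconstructed from the stated definitions of L1, L2 and t (an assumption, since the original equation did not survive extraction), and all numeric inputs are hypothetical placeholders.

```python
# Minimal sketch: specific leaf weight (SLW), succulence index (SucI) and
# stem elongation rate (SER). Numeric inputs are hypothetical placeholders.
import math

# Leaf-disc data (eight 12 mm discs, masses in g)
n_discs, disc_diameter_cm = 8, 1.2
leaf_area_cm2 = n_discs * math.pi * (disc_diameter_cm / 2) ** 2
m_fresh, m_dry = 0.152, 0.031

slw = m_dry / leaf_area_cm2               # SLW = Md / La  (g cm^-2)
suci = (m_fresh - m_dry) / leaf_area_cm2  # SucI = (Mf - Md) / La  (g H2O cm^-2)

# Summed internode lengths of a tagged shoot (cm) at the start and end of the interval
l1, l2, interval_days = 12.4, 18.1, 4
ser = (l2 - l1) / interval_days           # assumed SER = (L2 - L1) / t  (cm day^-1)

print(f"SLW = {slw:.4f} g/cm2, SucI = {suci:.4f} g/cm2, SER = {ser:.2f} cm/day")
```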
A discriminant analysis based on Mahalanobis distance (Duda et al., 2001) was performed to recognize the differential pattern and to classify the five cultivars into closer categories.

The differences between the two irrigation levels were significant (P < 0.01, Student's t-test) except at the beginning, i.e. at 42 DAS. Differences among cultivars were not significant either in the drought treatments or on the days of measurement. On average, soil water content decreased by about 50% in the unirrigated treatment relative to the irrigated treatment after flowering, at the end of the experiment (60 DAS).

Table 1. Effect of water stress on leaf water potential (LWP), osmotic potential (OP), relative water content (RWC), specific leaf weight (SLW) and succulence index (SucI). Data were collected 57 days after seeding (DAS) between 1100-1200 h. Means followed by different letter(s) in a column are significant at P < 0.05 (Student's t-test, n=3). DAS, days after seeding.

Effect on leaf water status
The effect of drought on leaf water status varied with the cultivars and parameters measured (Table 1). All parameters of water status, including leaf water potential, osmotic potential and relative water content, were lower in unirrigated than in irrigated plants in all cultivars. Cultivar Haibushi showed the lowest leaf water potential and osmotic potential in both irrigation treatments. All cultivars showed similar relative water content in each irrigation treatment.

Leaf anatomical characteristics
The cultivar Haibushi had the highest specific leaf weight in both irrigation treatments (Table 1). SucI of plants was not influenced by irrigation treatment. The cultivars Haibushi and Ishigaki-2 had the greatest SucI in both irrigation treatments.

Effect on photosynthetic parameters
Table 2. Effect of water stress on net photosynthetic rate (Pn), intercellular CO2 concentration (Ci), stomatal conductance (gs), transpiration rate (E) and vapor pressure deficit (VpdL). Measurements were taken 57 days after seeding (DAS) between 1000-1100 h.

The effect of drought was significant for Ci, gs, E and VpdL in cultivars Kentucky Wonder and 92783 (Table 2). Both cultivars had higher Ci, gs and E, and lower VpdL, in the irrigated than in the unirrigated treatment. The cultivar Haibushi showed the highest Pn, gs and E among all cultivars in both irrigation treatments. In the unirrigated treatment, cultivars Haibushi and Ishigaki-2 were significantly higher in Ci and gs than Kentucky Wonder and 92783, while Kentucky Wonder and 92783 showed higher VpdL than the other cultivars.

Effect on shoot growth
Stem elongation rate at all growth stages differed significantly with the cultivar, except at 45-49 DAS in the irrigated treatment.

Association of leaf water status with photosynthetic parameters and stem elongation rate
Photosynthetic parameters were not significantly correlated with leaf water potential (R2 = 0.01 and 0.02 for gs and Ci, respectively), but relative water content was positively correlated with gs and Ci (Fig. 2). Although absolute values of stem elongation rate were not associated with leaf water status, the reduction in stem elongation rate was significantly correlated with the reductions in leaf water potential and relative water content {100 − (ratio of values in unirrigated to irrigated treatments, expressed as a percentage)}: positively with the reduction in leaf water potential (Fig. 3a) and negatively with the reduction in relative water content (Fig. 3b).
Pattern recognition by discriminant analysis classified cultivars Haibushi, Ishigaki-2 and Kurodane Kinugasa in one category and Kentucky Wonder and 92783 in another category. Fig. 4 shows the correlation of stem elongation rate with specific leaf weight and SucI. The stem elongation rate correlated negatively with specific leaf weight (Fig. 4a) and SucI (Fig. 4b).

Number of pods per plant and seed yield, and their relationship with relative water content
Table 4. Effect of irrigation levels and cultivars on yield attributes and seed yield.

The effect of irrigation was not significant in any cultivar except for the number of pods per plant in 92783 and seed weight in Haibushi (Table 4). The number of pods per plant significantly decreased due to water stress in 92783. Fig. 5 shows the relationships between the reduction in relative water content and the reductions in the number of pods and seed yield per plant. The cultivars that exhibited a small reduction in relative water content in response to water stress showed a small reduction in pods and seed yield.

Discussion
The five snap bean cultivars displayed distinct responses to a prolonged drought. Drought significantly decreased soil water content in the unirrigated treatment. Water stress affected leaf water status, photosynthetic parameters and shoot growth in some cultivars. Cultivars Haibushi, Kurodane Kinugasa and Ishigaki-2 showed a larger drop in leaf water potential (Table 1) than the other cultivars. However, there were no differences in relative water content among the cultivars. A steeper leaf water potential gradient from soil to plant may enhance the ability of the plants to extract soil water at low soil water content (Coyne et al., 1982). Omae et al. (2005) reported that genotypic differences in pod setting and seed yield in snap bean were associated with differences in the maintenance of midday relative water content. Therefore, the maintenance of relative water content with larger decreases in leaf water potential in some cultivars might relate to their water-absorbing ability and contribute to a smaller reduction in seed yield (Fig. 5).

The reduction in leaf water potential due to water stress was linearly correlated with the reduction in stem elongation rate. A discriminant analysis revealed that the five cultivars displayed two distinct types of responses (Fig. 3a). One group, which included cultivars Haibushi, Ishigaki-2 and Kurodane Kinugasa, showed large reductions (17-20%) in both stem elongation rate and leaf water potential (Fig. 3a), and their seed yield was reduced less than that of the other cultivars (Fig. 5). Conversely, Kentucky Wonder and 92783 showed a larger reduction in relative water content compared with the cultivars in the other group (Fig. 3b). The reduction in shoot growth due to drought may be related to water-holding traits under drought such as specific leaf weight and SucI (Fig. 4), which are important leaf anatomical characteristics for restricting leaf water loss through cuticular transpiration under drought conditions. Osmotic adjustment also enables plants to maintain higher tissue water content, turgor and turgor-related processes during water deficit, as reported in many crop species (Morgan et al., 1986; Ritchie et al., 1990; Kumar and Singh, 1998). In this study, there was a small decrease in osmotic potential, -1.0 to -0.87 MPa, with a larger decrease in leaf water potential, -0.92 to -0.65 MPa (Table 1), indicating limited or no osmotic adjustment.
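A minimal sketch of the Mahalanobis-distance pattern recognition used to group the cultivars is given below; the response profiles are hypothetical placeholders (percentage reductions in leaf water potential, relative water content and stem elongation rate), not the measured values, and scipy is assumed to be available.

```python
# Minimal sketch: pairwise Mahalanobis distances between cultivar
# drought-response profiles; small mutual distances suggest one group.
# Profile values are hypothetical placeholders, not the study data.
import numpy as np
from scipy.spatial.distance import mahalanobis

profiles = {
    "Haibushi":          [19.0, 3.0, 55.0],
    "Ishigaki-2":        [18.0, 4.0, 52.0],
    "Kurodane Kinugasa": [17.0, 5.0, 49.0],
    "Kentucky Wonder":   [ 8.0, 12.0, 24.0],
    "92783":             [ 7.0, 13.0, 22.0],
}
X = np.array(list(profiles.values()))
VI = np.linalg.pinv(np.cov(X, rowvar=False))  # inverse covariance (pseudo-inverse)

names = list(profiles)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        d = mahalanobis(X[i], X[j], VI)
        print(f"{names[i]:18s} vs {names[j]:18s}: {d:.2f}")
```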
During the reproductive phase, maintenance of higher relative water content with decreasing leaf water potential due to drought plays an important role in terms of higher pod setting, pod retention and seed yield in snap bean . It was also demonstrated that Haibushi, a heat-tolerant cultivar maintained higher relative water content with decreasing leaf water potential than Kentucky Wonder, a heatsensitive cultivar when exposed to different degrees of high temperature . The results of this study displayed that relative water content was positively correlated with photosynthetic parameters such as C i and g s (Fig. 2). Sinclair and Ludlow (1985) also reported that photosynthesis, protein synthesis, NO 3 reduction, and leaf senescence are better correlated with changes in tissue water content than with leaf water potential. The reduction in relative water content was strongly and linearly correlated with the reduction in the number of pods per plant and seed yield (Fig. 5a, b), indicating that cultivars with a smaller reduction in relative water content showed a smaller reduction in number of pods and seed yield due to drought stress. Therefore, when we evaluate "the drought tolerance" as less reduction in yield when plants were exposed to drought, cultivars Ishigaki-2 and Haibushi can be recognized to have higher drought tolerance compared to Kentucky Wonder and 92783. On the other hand, Kurodane Kinugasa showed unique phenomenon, which was classified as same group with Haibushi and Ishigaki (Fig. 3) according to the relationship between stem elongation rate and water status, but showed intermediate reduction in the number of pods and seed yield among the cultivars (Fig. 5). Further study will be necessary to clarify the physiological mismatch between the classification of the cultivars (Fig. 3) and the results of seed yield (Fig. 5) in Kurodane Kinugasa. In conclusion, imposition of drought decreased soil water content which adversely affected leaf water status and stem elongation. A smaller reduction in relative water content was displayed by cultivars that showed a larger reduction in stem elongation rate and leaf water potential. Leaf anatomical characteristics, specific leaf weight and SucI were improved by a decrease in stem elongation rate under unirrigated conditions. Relative water content was well correlated with C i and g s . Most importantly, the cultivars with a smaller reduction in relative water content also displayed a smaller reduction in the number of pods per plant and seed yield.
2019-03-21T13:06:09.843Z
2007-01-01T00:00:00.000
{ "year": 2007, "sha1": "d5b2c939e2b688fae39458e2d37a635faee9b841", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.1626/pps.10.28", "oa_status": "GOLD", "pdf_src": "TaylorAndFrancis", "pdf_hash": "e4acda466812bcc703683492a443b957a3deddf8", "s2fieldsofstudy": [ "Agricultural And Food Sciences", "Environmental Science" ], "extfieldsofstudy": [ "Biology" ] }
34867086
pes2o/s2orc
v3-fos-license
A Family of Ca2+-Dependent Activator Proteins for Secretion Ca2+-dependent activator protein for secretion (CAPS) 1 is an essential cytosolic component of the protein machinery involved in large dense-core vesicle (LDCV) exocytosis and in the secretion of a subset of neurotransmitters. In the present study, we report the identification, cloning, and comparative characterization of a second mammalian CAPS isoform, CAPS2. The structure of CAPS2 and its function in LDCV exocytosis from PC12 cells are very similar to those of CAPS1. Both isoforms are strongly expressed in neuroendocrine cells and in the brain. In subcellular fractions of the brain, both CAPS isoforms are enriched in synaptic cytosol fractions and also present on vesicular fractions. In contrast to CAPS1, which is expressed almost exclusively in brain and neuroendocrine tissues, CAPS2 is also expressed in lung, liver, and testis. Within the brain, CAPS2 expression seems to be restricted to certain brain regions and cell populations, whereas CAPS1 expression is strong in all neurons. During development, CAPS2 expression is constant between embryonic day 10 and postnatal day 60, whereas CAPS1 expression is very low before birth and increases after postnatal day 0 to reach a plateau at postnatal day 21. Light microscopic data indicate that both CAPS isoforms are specifically enriched in synaptic terminals. Ultrastructural analyses show that CAPS1 is specifically localized to glutamatergic nerve terminals. We conclude that at the functional level, CAPS2 is largely redundant with CAPS1. Differences in the spatial and temporal expression patterns of the two CAPS isoforms most likely reflect as yet unidentified subtle functional differences required in particular cell types or during a particular developmental period. The abundance of CAPS proteins in synaptic terminals indicates that they may also be important for neuronal functions that are not exclusively related to LDCV exocytosis. Regulated secretion of neurotransmitters, hormones, or peptides from neurons and neuroendocrine cells is mediated by the Ca 2ϩ -dependent fusion of secretory vesicles with the plasma membrane (1)(2)(3). Two main types of secretory vesicles, small clear (SCV) 1 and large dense-core (LDCV) vesicles, are responsible for the secretion of classical neurotransmitters and peptides/neuromodulators, respectively. Despite some striking differences between SCVs and LDCVs with respect to their structure, release kinetics, and recycling, the two vesicle types employ a very similar set of proteins for the regulation and execution of their Ca 2ϩ -regulated fusion with the plasma membrane. Such conserved components of the secretory machinery of SCVs and LDCVs include (i) the SNARE complex components synaptobrevin, SNAP-25, and syntaxin, which mediate the fusion reaction, (ii) the putative exocytotic Ca 2ϩ -sensor synaptotagmin, (iii) the SNARE complex regulators soluble NSF-attachment protein and NSF, or (iv) the syntaxin regulator Munc18 (1, 4 -6). In contrast to the considerable number of proteins that function in both the SCV and LDCV secretory pathways, only very few proteins were proposed to be more or less specifically involved in the secretion of only one type of vesicle. One such protein is CAPS1, which was discovered as an essential cytosolic factor in Ca 2ϩ -triggered norepinephrine release from cracked PC12 cells where it is required for a secretory step that follows ATP-dependent priming (7)(8)(9). 
CAPS1 is a 145-kDa protein that contains a central PH domain whose binding to acidic phospholipids in the plasma membrane is essential for CAPS1 function (10), an MH domain of unknown function which is also found in members of the Munc13 family of vesiclepriming proteins (11), and a C-terminal membrane-association domain that mediates LDCV binding (10). In addition, a C 2 domain precedes the central PH domain (10). Not all aspartic acid residues essential for Ca 2ϩ binding to C 2 domains are conserved in the CAPS1 C 2 domain, indicating that this domain may not serve as a Ca 2ϩ sensor. Genetic studies on the invertebrate orthologues of CAPS1 in Caenorhabditis elegans (12) and Drosophila melanogaster (5) demonstrated an important role of CAPS isoforms in the secretion of a subset of neurotransmitters. Antibody inhibition studies in chromaffin cells (13) and melanotrophs (14) showed that CAPS1 is required for the release of LDCVs that are rapidly releasable and tightly coupled to Ca 2ϩ influx. A specific function of CAPS1 for LDCV exocytosis was deduced from the findings that norepinephrine but not glutamate release from permeabilized synaptosomes is dependent on CAPS1 (15) and that membranebound CAPS1 in brain homogenates is associated with LDCVs but not with synaptic vesicles (16). Currently, the molecular mechanism of CAPS1 function in LDCV secretion is unknown. In the present study, we report the identification, cloning, and comparative characterization of a second mammalian CAPS isoform, CAPS2. Our data demonstrate that CAPS isoforms in higher mammals compose a family of two structurally and functionally highly related proteins with striking differences in their spatial and temporal expression patterns. MATERIALS AND METHODS Cloning of Human CAPS2 cDNA-Protein profile searches for proteins containing MHD domains identified a murine cDNA fragment (GenBank TM /EBI accession no. AF000969) with high homology to rat CAPS1 (11). Subsequent searches in human and murine genome databases (Celera) showed that the initially identified murine cDNA fragment corresponded to an as yet unidentified second CAPS gene, CAPS2. The complete coding sequences of murine and human CAPS2 were assembled from genomic data base sequences (Celera; GA_x6K02T2P3E9, murine; GA_x5YUV32W188, human). Based on this sequence information, the human CAPS2 cDNA was amplified by PCR from a human brain cDNA preparation (Clontech), cloned into pCRII-TOPO (Invitrogen), and sequenced using the dideoxy chain termination method with dye terminators on an ABS 373 DNA sequencer (Applied Biosystems). A eukaryotic expression vector encoding fulllength human CAPS2 (pcDNA3-hCAPS2) was generated in pcDNA3 (Invitrogen). A corresponding full-length rat CAPS1 expression vector (pcDNA3-rCAPS1) was generated from a published CAPS1 plasmid (8). Multiple sequence alignments were calculated with Lasergene (DNAS-TAR) and improved manually. In Situ Hybridization-In situ hybridization experiments with mouse brain and adrenal gland sections were performed as described previously (17). Antisense oligonucleotides representing the following sequences were chosen as probes: bp 1696 -1740 of mouse CAPS1 cDNA (GenBank TM /EBI accession no. NM_012061) and bp 461-505 of mouse CAPS2 cDNA (GenBank TM /EBI accession no. AF000969). Images were captured with a Camedia C-3030 Zoom digital camera (Olympus). 
Generation of Antibodies Against CAPS1 and CAPS2-Recombinant GST-rCAPS1 (218 -390) fusion protein was generated by using the expression plasmid pGEX-rCAPS1 (218 -390), which encodes rat CAPS1 residues 218 -390 in frame with GST (18). A polyclonal antibody directed against rat CAPS1 was generated with GST-rCAPS1 (218 -390) fusion protein as the antigen. The antiserum was affinity-purified on GST-rCAPS1 (218 -390) and immobilized on an Immobilon-P polyvinylidene fluoride membrane (Millipore) (19). The antibody was monospecific for CAPS1 and cross-reacted with the murine orthologue, as demonstrated by Western blots with wild-type and CAPS1-deficient brain tissue from a deletion mutant mouse. 2 A polyclonal antibody directed against murine CAPS2 was raised to a peptide with the sequence H 2 N-CQKLKRSQNSAFLD-CONH 2 (Eurogentec), which corresponds to residues 309 -321 of mouse CAPS2 and is completely conserved in human CAPS2. For affinity purification of the antibody, 10 mg of peptide antigen were conjugated to 0.5 g of thiopropyl-Sepharose 6B (Amersham Biosciences). The peptide-conjugated resin was incubated with 5 ml of antiserum and then washed extensively, first with 0.1 M Tris (pH 8.0), followed by 0.5 M NaCl/0.1 M Tris (pH 8.0) and 0.01 M Tris (pH 8.0). Bound antibodies were eluted with 0.2 M glycine (pH 2.7)/0.1 M NaCl, followed by 0.2 M glycine (pH 2.3)/0.1 M NaCl, and immediately neutralized with Tris-HCl. Two ml of each eluate were tested on Western blots. The fraction with the highest titer and the lowest cross reactivity was then concentrated with Centricon 30 (Amicon). Immunocytochemistry-Animals were deeply anesthetized with tribromoethanol. After a brief rinse with normal saline (50 -100 ml), each rat was perfused transcardially with 300 ml of cold 4% paraformaldehyde in PBS (pH 7.4) for 10 min. The brains were quickly removed, placed in 20% sucrose/0.02 M potassium PBS for cryoprotection, and frozen on dry ice. Sagittal 10-m frozen cryostat sections were collected and blocked in PBS containing 2% normal goat serum and 0.25% Triton X-100 for 30 min at room temperature. Thereafter, sections were incubated with polyclonal antibodies to CAPS1 (1:2000) or CAPS2 (1:2000) and monoclonal antibodies to GAD65 (1:1000, Chemicon) or synaptophysin (1:500) at 4°C for 16 h. The antibodies were diluted in PBS that contained 2% normal goat serum and 0.25% Triton X-100. After 3 rinses in PBS, the sections were incubated with Alexa488-or Alexa568-labeled goat anti-rabbit or anti-mouse IgG secondary antibodies (Vector Laboratories) at room temperature. The sections were then rinsed in PBS and mounted with 1.5% N-propyl gallate. Digital images were captured with an LSM510 laser scanning microscope (Zeiss) and analyzed with the LSM510 software. Ultrastructural Analysis-Adult male rats were perfused with saline, followed by 4% paraformaldehyde/0.1% glutaraldehyde in 0.1 M phosphate buffer. The brains were taken out, and 40-m-thick vibratome sections were collected and stained for CAPS1 as indicated above, except that a biotinylated secondary antibody was used (1:300, Vector Laboratories), followed by an avidin-biotin-peroxidase amplification (Vectastain ABC, Vector Laboratories), a 3,3Ј-diaminobenzidine labeling reaction, and silver intensification (24). Sections were extensively washed, osmicated for 1 h (1% OsO 4 in phosphate buffer), dehydrated through a graded series of ethanol and propylene oxide, and embedded in Durcupan (Durcupan ACM, Fluka) by a 48-h polymerization at 60°C. 
Light-gold ultra-thin sections were cut, contrasted with uranyl acetate and lead citrate, and observed in a LEO 912AB transmission electron microscope (Zeiss). Digital images were captured with a ProScan CCD camera and analyzed with the Analysis version 3.2 software (Soft Imaging System). Preparation of PC12 Cell Ghosts and Measurement of [ 3 H]NE Release- The preparation of PC12 cell ghosts and the measurement of [ 3 H]NE release were performed as described (8). The techniques and protocols involved in overexpression of CAPS proteins in COS-1 cells, preparation of cytosol fractions from these cells, and [ 3 H]NE release assays were published previously (10). Eukaryotic expression vectors encoding full-length rat CAPS1 and mutant CAPS1-bearing mutations in the PH domain (R558D/K560E/K561E) were generated in pcDNA3.1/myc-His B (Invitrogen). A full-length human CAPS2 expression vector was generated in pEF6/myc-His C (Invitrogen). All recombinant proteins were expressed in COS-1 cells, purified by nickelnitrilotriacetic acid agarose (Qiagen), analyzed by SDS-PAGE and Coomassie Blue staining, and quantified with known amounts of CAPS1. RESULTS Structure and Conservation of CAPS Isoforms-By using MH domain profiles, we identified CAPS1 and a closely related murine protein fragment (mCPD2; GenBank TM /EBI accession no. AF000969) as distant relatives of members of the Munc13 family of synaptic vesicle priming proteins (11). Detailed analysis of the murine and human genomic Celera databases revealed that mCPD2 is part of a novel CAPS gene product, CAPS2. No evidence for additional CAPS genes in the current murine and human genomic databases was obtained. We assembled the complete murine and human CAPS2 coding sequences from genomic sequence data (Celera: GA_x6K02T2P3E9, murine; GA_x5YUV32W188, human) and then used the deduced coding sequences to design specific PCR primers and amplify the full-length human CAPS2 cDNA from a brain cDNA preparation. Three types of PCR amplicons were obtained, subcloned, and sequenced. They differed by the absence or presence of a 333-bp fragment in the 5Ј half and/or of a 120-bp fragment in the 3Ј half of the cDNA (Fig. 1). Because these regions correspond to individual exons in the murine and human genomic sequences, they are likely to represent alternatively spliced sequences. In the course of the present study, a murine CAPS2 cDNA sequence was deposited in GenBank TM / EBI under accession no. AY072800. This sequence is almost identical to the sequence we deduced from genomic data and was used for subsequent comparative sequence analyses. Comparison of CAPS sequences between species showed that the respective murine and human CAPS1 and CAPS2 orthologues are very similar to each other, with over 95% identity at the amino acid level. The corresponding sequence identity between the homologous isoforms was about 80% (Fig. 1). All CAPS proteins are large multidomain proteins containing a central PH domain and an MH domain in their C-terminal half. The novel mouse and human CAPS2 isoforms contain 1275 residues (146 kDa) and 1296 residues (148 kDa), respectively, and are slightly smaller than their CAPS1 counterparts which contain 1382 residues (156 kDa) and 1353 residues (153 kDa), respectively. Together, these findings show that CAPS isoforms in mammals form a small family of two highly homologous proteins with identical domain structure. 
Expression Patterns of CAPS mRNAs-To determine the expression patterns of CAPS mRNAs in different tissues, we hybridized RNA blots loaded with poly(A) ϩ -enriched RNA from different rat tissues at high stringency with probes representing rat CAPS1 (bp 152-997; GenBank TM /EBI accession no. U16802) and mouse CAPS2 (bp 10 -744; GenBank TM /EBI accession no. AF000969) cDNA fragments. CAPS1 mRNA was detected almost exclusively in brain. In contrast, CAPS2 mRNA was expressed most strongly in brain but was also detectable in lung, liver, kidney, and testis (Fig. 2). The sizes of the CAPS1 and CAPS2 mRNAs were very similar and in the range of 5.0 kb. Thus, the CAPS protein family, like most other protein families involved in regulated exocytosis, is composed of brain-specific and more ubiquitously expressed isoforms. Cellular Expression Patterns of CAPS mRNAs in the Central Nervous System-So far, our data demonstrated that the two CAPS isoforms are most strongly expressed in the central nervous system. On the other hand, CAPS1 was initially discovered in the context of LDCV secretion from PC12 cells (7-9), indicating a role in adrenal chromaffin granule secretion in vivo. To determine the cellular expression pattern of CAPS mRNAs in the brain and adrenal, we performed in situ hybridization experiments using representative mouse organs. We found that in the adult mouse brain, CAPS1 mRNA is strongly expressed in almost all nerve cells of the brain, although it is absent from glial cells (Fig. 3). In contrast, CAPS2 mRNA expression in the adult brain is less uniform, with high mRNA levels in cerebellum, cortex, olfactory bulb, CA1/CA2 regions of the hippocampus, and dentate gyrus, and levels below the detection limit in the CA3 regions of the hippocampus, striatum, thalamus, superior and inferior colliculi, and brain stem (Fig. 3). A similar picture emerged with respect to the brains of newborn mice, where CAPS1 mRNA was strongly expressed in almost all neuron-rich areas, whereas CAPS2 mRNA expression was less uniform and strongest in the developing hippocampus and cerebellum (Fig. 3). In the adult adrenal gland, CAPS1 mRNA expression was restricted to the chromaffin cells of the medulla (Fig. 3). CAPS2 mRNA levels in the adrenal were at the detection limit of our in situ hybridization assays (Fig. 3). These findings demonstrate that the two CAPS isoforms are differentially expressed in the central nervous system and neuroendocrine cells. Thus, despite their high degree of homology, the two CAPS variants may only be partially redundant in mammals. Expression Patterns of CAPS Proteins-For the isoform-specific detection of CAPS proteins, we generated polyclonal antisera to CAPS1 and CAPS2 using a GST fusion protein with a highly conserved CAPS1 sequence and a highly conserved CAPS2 peptide sequence as antigens (see "Materials and Methods"). The antibodies were found to be isoform-specific, as determined by Western blots of HEK293 cells overexpressing full-length rat CAPS1 or human CAPS2, where single bands of 150 kDa (CAPS1) and 140 kDa (CAPS2) were detected (data not shown). Moreover, the antibody to CAPS1 readily crossreacted with mouse CAPS1, as determined in Western blot analyses of wild-type and CAPS1-deficient brain samples (data not shown). 
2 Likewise, the antibody to CAPS2, which was raised against a completely conserved mouse/human peptide, cross-reacted with mouse and rat CAPS2, as illustrated by the specific detection of a 140-kDa protein and a slightly smaller band in mouse and rat tissue samples (Fig. 4). The smaller bands at ϳ120 kDa that were detected in several tissues by our CAPS1 and CAPS2 antibodies do not originate from degradation (not shown) and most likely represent splice variants (Fig. 1). In agreement with the data obtained in RNA blot experiments, CAPS1 protein was found to be predominantly expressed in brain and, at lower levels, in pancreas and adrenal gland. No CAPS1 was detected in liver, kidney, testis, lung, heart muscle, spleen, and skeletal muscle (Fig. 4A). In contrast, high CAPS2 protein expression was not only detected in brain but also in liver and testis. CAPS2 levels in adult adrenal gland and lung were just above the detection limit of the Western blot assay, whereas no CAPS2 was detected in pancreas, kidney, heart muscle, spleen, and skeletal muscle (Fig. 4A). Within the central nervous system, CAPS1 protein expression was found to be uniformly strong in all brain regions except the spinal cord, where only low amounts were detected. CAPS2 protein expression, on the other hand, was strongest in the cerebellum, moderate in brain stem and thalamus, weak in hippocampus, hypothalamus, and superior/inferior colliculi, and at or below the detection limit in homogenates from all other brain regions tested (Fig. 4B). Analysis of cortical subcellular fractions showed a very similar subcellular distribution of CAPS1 and CAPS2 protein. Both isoforms are largely soluble, with a membrane-bound pool associated mainly with vesicular (LP2) and less with synaptic plasma membrane fractions (LP1) (Fig. 4C). Interestingly, the soluble pools of CAPS1 and CAPS2 were strongly enriched in synaptic fractions (LS2) (Fig. 4C). During brain development, CAPS1 protein expression is similar to that of synaptic markers. Expression is first detectable late in embryogenesis (embryonic day 14) and increases to reach a plateau about 20 days after birth, when most synapses have been formed. A smaller CAPS1 immunoreactive band, most likely representing a splice variant, is detectable only in late phases of development, at and after postnatal day 21 (Fig. 4D). In contrast, CAPS2 protein expression levels are more stable during development and somewhat higher in the embry- onic brain as compared with later phases of development (Fig. 4D). In summary, the high degree of homology between the two CAPS isoforms and their identical subcellular distribution in rat brain fractions indicate that CAPS1 and CAPS2 serve similar functions in the central nervous system. Their striking non-overlapping developmental and tissue expression patterns (Fig. 4, A and D; also see below), on the other hand, are compatible with the view that individual CAPS isoforms also play specific roles in certain cell types or during particular phases of development. Distribution of CAPS Proteins in Neural and Neuroendocrine Tissue-To compare the localization of CAPS1 and CAPS2 in the brain and adrenal gland, we performed immunocytochemical-labeling experiments on rat tissue sections using isoformspecific primary and fluorescently labeled secondary antibodies. CAPS1 distribution in hippocampus was very similar to that of the presynaptic marker synaptophysin. 
CAPS1 immunoreactivity was absent from the cell body layers of the hippocampus and dentate gyrus but abundant in the synaptic neuropil, except for the stratum lucidum and stratum lacunosum moleculare where CAPS1 was not detected (Fig. 5A). In contrast, CAPS2 immunoreactivity in the hippocampus was restricted to the polymorph layer of the dentate gyrus and, strikingly, to the stratum lucidum where CAPS1 was not detectable (Fig. 5A). The prominent nuclear labeling, which was obtained with the antibody to CAPS2 (Fig. 5, A and B), was also seen with the corresponding preimmune serum (not shown) and is, therefore, likely due to nonspecific cross-reactivity. In the cerebellum, the two CAPS isoforms showed a partially complementary distribution. CAPS1 was mainly detected in the glomeruli of the granule cell layer where it colocalized with synaptophysin, whereas CAPS2 was found almost exclusively in the synaptic neuropil of the molecular layer, also colocalized with synaptophysin (Fig. 5B). This differential distribution of CAPS1 and CAPS2 protein in cerebellum is in agreement with the distribution of the respective mRNAs (Fig. 3). Here, the predominant expression of CAPS2 mRNA in granule cells (Fig. 3) is the reason for the high CAPS2 protein levels in the molecular layer where granule cell axons terminate (Fig. 5B), whereas low levels of CAPS1 mRNA in the granule cell layer ( Fig. 3) are paralleled by low CAPS1 protein levels in the molecular layer (Fig. 5B). In the olfactory bulb, CAPS1 was detected mainly in the glomeruli, colocalized with synaptophysin (Fig. 5C), whereas CAPS2 was not detectable (data not shown). Similarly, with our staining and imaging methods, only CAPS1 but not CAPS2 immunoreactivity was detectable in the adult adrenal gland, where it was restricted to the medulla (Fig. 5D). This apparent lack of CAPS2 in the adult adrenal gland, as determined by immunostaining, correlates well with the barely detectable levels of the corresponding mRNA, as determined by in situ hybridization assays (Fig. 3). Only by the extremely sensitive Western blotting technique was CAPS2 detectable in homogenates of adult adrenal gland (Fig. 4A), indicating that CAPS2 protein is indeed expressed in this tissue during adulthood, albeit at extremely low levels. Taken together, our light microscope data demonstrate a strikingly complementary synaptic distribution of CAPS1 and CAPS2 in certain brain regions such as the hippocampus and cerebellum, indicating that different types of synapses employ different CAPS isoforms. At higher resolution, the CAPS1 localization within the glomeruli of the cerebellar granule cell layer appeared to overlap only partially with that of synaptophysin. In particular, peripheral regions of glomeruli contained synaptophysin but not CAPS1 (Fig. 6B). Indeed, costaining with antibodies to GAD65, a marker of GABA-ergic terminals, revealed very little colocal- Blots containing poly(A) ϩ -enriched RNA from the indicated rat tissues (Clontech) were hybridized at high stringency with uniformly labeled probes from the coding regions of rat CAPS1 and mouse CAPS2 and exposed to film for 20 h. Arrows indicate the specific CAPS1 and CAPS2 mRNA bands. Note that CAPS1 mRNA is expressed in a brain-specific manner, whereas CAPS2 mRNA is also expressed outside the nervous system. ization of CAPS1 with GAD65 (Fig. 
6A), indicating that CAPS1 is absent from GABA-ergic Golgi cell nerve terminals and essentially specific for mossy fiber terminals within the glomeruli of the cerebellar granule cell layer. This notion was supported by an electron microscopic investigation in which we found CAPS1 to be localized specifically to mossy fiber terminals in the cerebellar granule cell layer but absent from neighboring symmetric, presumably GABA-ergic terminals (Fig. 6, C and D). In the molecular layer, CAPS1 was found to be very rare and was occasionally detected in dendritic compartments (Fig. 6E). In the hippocampus, CAPS1 was found mainly in asymmetric, presumably glutamatergic terminals (Fig. 6F). Thus, CAPS1 appears to be specific for glutamatergic terminals. So far, our antibodies to CAPS2 have not provided reliable staining at the ultrastructural level (data not shown). Comparison of CAPS1 and CAPS2 Function in Regulated Exocytosis-To test whether the novel CAPS2 isoform is similar to CAPS1 with respect to its role in the Ca 2ϩ -dependent triggering step of LDCV exocytosis, we examined the ability of proteins purified from COS-1 cells overexpressing wild-type CAPS1, wild-type CAPS2, or an inactive form of CAPS1 (R558D/K560E/K561E) to reconstitute the Ca 2ϩ -dependent triggering step of LDCV exocytosis from permeable PC12 cells. The purified dysfunctional CAPS1 R558D/K560E/K561E mutant (10) showed only background activity (Fig. 7). In contrast, wild-type CAPS1 or wild-type CAPS2 exhibited significantly higher levels of activity that were similar for reconstituting the Ca 2ϩ -dependent triggering step of LDCV exocytosis from permeable PC12 cells. These results indicate that CAPS1 and CAPS2 are functionally equivalent regulators of Ca 2ϩ -dependent LDCV exocytosis. DISCUSSION CAPS1 was initially discovered as a cytosolic protein that functions in Ca 2ϩ -dependent exocytosis from permeable PC12 cells at a step that follows ATP-dependent priming (7)(8)(9). How this biochemically defined ATP-dependent priming step in the secretory process relates to the LDCV priming steps that are determined by patch-clamp amperometry with high temporal resolution is unclear. Later studies revealed an LDCV-specific role of CAPS1 (15) that correlated well with the fact that membrane-bound CAPS1 in brain homogenates is associated with LDCVs but not with synaptic vesicles (16). However, the molecular basis of CAPS1 function has remained unknown. Its limited homology to Munc13 proteins in a region that is involved in syntaxin binding to Munc13s indicates that CAPS1 might serve a role in LDCV exocytosis analogous to the priming activity of Munc13s, which appears to involve modulation of syntaxin function (25,26). In the present paper, we describe a novel CAPS isoform, CAPS2. The same isoform was very recently discovered in an independent study searching for novel interactors of dystrophin. However, proof of an interaction of CAPS2 with dystrophin beyond yeast two-hybrid data was not provided, and CAPS2 was characterized only at the level of its gene structure and mRNA expression using PCR and Northern blotting (27). CAPS2 is highly homologous to CAPS1 (78 -79% identity). Indeed, residues that are essential for CAPS1 function, such as W537, K538, R540, R558, K560, or K561 in the PH domain of rat CAPS1 (10) are conserved in CAPS2 (Fig. 1). As a consemental stages were designated as follows: En, embryonic day n; Pn, postnatal day n. 
Note that the subcellular distribution of CAPS1 and CAPS2 is very similar, with a large soluble synaptic pool in LS2 and a small membrane bound pool associated with the vesicle fraction LP2. In contrast, striking differences between CAPS isoforms are apparent with respect to tissue, brain region, and developmental expression patterns. FIG. 4. CAPS protein expression in different tissues, brain regions, brain subcellular fractions, and developmental phases. Homogenates from the indicated organs (A), rat brain regions (B), rat cortex subcellular fractions (C), and brains from mice of different ages (D) were analyzed by Western blotting using specific antibodies to the indicated proteins (arrows). Subcellular fractions were designated as follows: H, homogenate; P1, nuclear pellet; P2, crude synaptosomal pellet; P3, light membrane pellet; S3, cytosolic fraction; LP1, lysed synaptosomal membranes; LP2, crude synaptic vesicle fraction; LS2, cytosolic synaptosomal fraction; S1, supernatant after synaptosome sedimentation; LS1, supernatant after LP1 sedimentation. Develop-quence, CAPS2 is functionally equivalent to CAPS1 with respect to its role in LDCV exocytosis from PC12 cells (Fig. 7). In addition, the subcellular localization of CAPS1 and CAPS2 within the cortex, as determined biochemically, is very similar (Fig. 4C). Finally, just like CAPS1 (Figs. 5 and 6), CAPS2 appears to be a presynaptic protein in the central nervous system, as illustrated best by the strong mRNA expression in cerebellar granule cells accompanied by a strong protein expression in the cerebellar molecular layer where the granule cell axons terminate (Fig. 5B). Taken together, these data indicate that CAPS1 and CAPS2 have very similar molecular (10). Incubations for Ca 2ϩ -triggered norepinephrine release were conducted for 3 min with the indicated proteins purified from COS-1 cells overexpressing wild-type CAPS1 (ࡗ), wild-type CAPS2 (f), or an inactive form of CAPS1 with indicated mutations in the PH domain (OE). Maximal Ca 2ϩ -dependent norepinephrine release obtained with 20 nM CAPS1 was set as 100%. Results are representative of two independent experiments, with data shown as the means of duplicate determinations with indicated range. functions in a very similar subcellular compartment. In that context, the striking differences between CAPS1 and CAPS2 with respect to their expression in different tissues (Fig. 4A), different brain regions (Fig. 4B), different synapse populations (Fig. 5, A and B), and different developmental phases (Fig. 4D) most likely reflect as yet unidentified subtle functional differences that are specifically required in the respective tissue or brain area or during a particular developmental period. In view of the apparently specific role at least of CAPS1 in LDCV exocytosis, the possible function of CAPS2 expressed in liver, lung, or testis (Fig. 4A) remains unclear. To address these open questions, we have initiated a detailed genetic study involving ablation of the two CAPS genes in mice. Presently available data indicate an LDCV-specific function of CAPS1 (15,16). This is supported by the detection of CAPS1 mRNA and protein in adrenal slices (Figs. 3 and 5D). In view of these published and novel findings, a striking and surprising observation of the present study was the highly specific and strong presynaptic accumulation of CAPS1 and CAPS2 (Fig. 5). In the case of CAPS1, this presynaptic localization was verified by electron microscopic observations (Fig. 
6, C-F) and appeared to be specific for asymmetric glutamatergic synapses (Fig. 6, A-D). Electron micrographs of mossy fiber terminals in the cerebellar granule layer (see example in Fig. 6C) typically showed that entire synapses were filled with CAPS1 immunoreactivity even when no LDCVs were present, and accumulations of immunoreaction product were only sporadically associated with LDCVs (Fig. 6C, arrow). Admittedly, our electron microscopic method did not reach the level of resolution of immunogold labeling, and it does not, in principle, allow one to draw conclusions about the subcellular compartment or secretory vesicle type that CAPS1 is bound to and acts upon. Nevertheless, the abundance of CAPS1 (and CAPS2) in presynaptic terminals is very suggestive of an additional function of these proteins that is not linked exclusively to LDCV exocytosis. Apart from a direct, possibly Munc13-like role of CAPS proteins in classical synaptic transmission, such LDCV-independent functions of CAPS could also involve a role in the delivery of active-zone transport vesicles to synapses (28). Again, detailed genetic studies in vertebrates will be helpful to resolve this problem. In summary, we have identified a second mammalian CAPS isoform, CAPS2, whose structure and function are very similar to those of CAPS1 and which may function in a very similar subcellular context, i.e. in neuroendocrine cells and presynaptic terminals of the central nervous system. Differences in the spatial and temporal expression patterns of the two CAPS isoforms most likely reflect as yet unidentified subtle functional differences required in particular cell types or during a particular developmental period. The abundance of CAPS proteins in synaptic terminals indicates that they may also be important for neuronal functions that are not related exclusively to LDCV exocytosis.
Adaptation Planning Support Toolbox: Measurable performance information based tools for co-creation of resilient, ecosystem-based urban plans with urban designers, decision-makers and stakeholders science. With more and more cities worldwide that will make the step from policymaking to actual adaptation-inclusive urban (re)development practice we foresee a growing demand for such tools. Adaptation of urban areas The need for adaptation of urban areas to changing climatic conditions is widely recognized (Deltaprogramma, 2015;IPCC, 2007IPCC, , 2012;;PROVIA, 2013).Flooding, drought, heat stress and related problems with water quality, water supply and land subsidence, aggravated by the UHI effect, are increasing hazards threatening the liveability of our urban areas as well as our social and economic urban systems (Albers et al., 2015;Jha et al., 2012;Rovers et al., 2014;World Bank, 2010;Zevenbergen et al., 2010).Risks are further increased by on-going urbanization (Nichols et al., 2007;UN DESA, 2014) and by intensification of urban land use; the invested capital and the asset value of buildings, infrastructure and industrial facilities has increased drastically over the past decades (Kind, 2013).Although the need for adapting our urban environments is clear, in practice adaption is difficult.Opportunities for adaptation are often limited to new development projects, to large infrastructural renovation and renewal projects or to initiatives from individual residents (Van der Brugge and De Graaf, 2010). Adaptation requires the construction of structural or "hard" adaptation measures (Hallegatte, 2009;Pelling, 2011).Such measures are physical or technological interventions, constructed facilities that require space and therefore are subject of spatial planning and design (Taylor and Wong, 2002).This article will focus on the right design of structural adaptation measures, as embedded in a planning process that leads to a decision on a spatial adaptation plan. The pallet of adaptation measures has extended dramatically over the past decades.Earlier, Sustainable Urban Drainage Systems (SUDS) (CIRIA, 1998;Svenske Vatten-och Aflopsverksföreningen, 1983) and Water Sensitive Urban Design(WSUD) for urban drainage (Brown et al., 2008;Engineers Australia, 2006), nowadays also known as green or blue-green infrastructure, were introduced.Maksimovic et al. (2014) recently argue that a new concept of Multiple-Use Water Services (MUS) is emerging.MUS solutions enhance the synergy of urban water (blue) infrastructure with green assets and ecosystem services, are economically viable and climate (environmental) adaptive. Ecosystem-based Adaptation (EbA) is at the heart of this MUS development.EbA-measures integrate the use of biodiversity and ecosystem services into an overall strategy for helping people adapt to climate change (Munroe et al., 2012).In addition to flood control, drought mitigation and heat stress reduction they provide e.g.aesthetic quality, recreational and restorative capacity and health benefits (Opdam et al., 2009;Van den Berg et al., 2007;Van den Berg et al., 2015).This article shows how planning 'blue-green' EbA measures is used to advance climate resiliency, while maximizing their co-benefits. Adaptation planning Urban planning exists of a series of more or less consecutive phases starting from system analysis and program development (initiative phase), via conceptual, preliminary and final design (design phase) up to implementation (Fig. 
1).The process ends with a final decision on an adaptation or (re)development plan.Although shown as a straightforward, stepwise process in theory, the process in practice often reiterates to an earlier stage to investigate alternative adaptation pathways. Many guidelines on climate resilient urban planning provide procedures for hazard, exposure and vulnerability analysis and an overview of potential solutions and/or best practices (Challenge for Sustainability; Climate-ADAPT; Deltaprogramma N&H, 2014; EPA; Great Lakes and St. Lawrence Cities Initiative; PROVIA, 2013).They however lack guidance where it comes to the selection of appropriate packages of adaptation measures during the initiative and design phases (Voskamp and Van de Ven, 2015).For these phases tools seems unavailable to support stakeholders to make hard choices which adaptation measures are attractive and effective for the project area (Bours et al., 2014;PROVIA, 2013); this while complex simulation models to evaluate the expected hydraulic and hydrological performance of the final plan are readily available (Lerer et al., 2015) In the initiative phase, urban planners are often in the lead of the process.Eliasson (2000) showed that climatology so far has a low impact on the planning process; urban planners' use of climatic information is unsystematic as the urban climatologists fail to provide them with good arguments, suitable methods and tools.This underlines the need for a planning support system that bridges the gap between urban planners and engineers; she makes a plea for a "communicative approach" to the planning process. Adaptation support tools for collaborative planning Involvement of local stakeholders, land & water engineers, experts from other disciplines and decision-makers is considered essential in particular in planning reconstruction of existing urban areas.Each of them not only has different interests, agendas and roles in the process.They differ in their sense of urgency of the problem, their approach to the problem, their language and knowledge level, and their rationality regarding potential solutions (Van Stigt et al., 2015).Design workshops during the initiative phase are meant to get to know each other, to share each other's knowledge and understanding of the problems and to collectively identify interesting adaptation solutions. Question is how to support the planners, stakeholders and decision-makers in this analysisdialoguedesign-engineering process with knowledge and information, in order to get a converging learning process that leads to a final positive decision on an adaptation plan?Such planning support tools should raise awareness, present the broad range of adaptation options, let participants explore the impact of different design choices on the climate resiliency of their project area (Pelzer et al., 2013) and maximize the co-benefits of adaptation measures. 
The goal of our study was to develop a toolbox that supports the incorporation of climate adaptation in the actual planning and design practice of cities. This Adaptation Planning Support Toolbox was developed to provide urban planners, landscape architects, civil engineers, local stakeholders and decision makers with quantified, evidence-based information on the climate resilience of their ideas in the early phases of the planning process and to facilitate decision-making during conceptual design workshops. In design workshops the toolbox should support them in sharing their knowledge and discussing alternative measures, including location, size, costs and (co)benefits.

2. Toolbox to support adaptation planning

2.1. An integrated 'dialogue-design-engineering' planning process

The Adaptation Planning Support Toolbox was developed to effectively support the collaborative planning process in the phases of program development and/or conceptual planning. See Supplementary Material part A for the underlying principles and concepts. Two tools were developed to support the 'dialogue-design-engineering' planning process (Fig. 1). The Climate Adaptation App (climateApp) informs participants about more than 120 potential adaptation measures and produces a long list of relevant measures. The Adaptation Support Tool (AST) guides stakeholders through the next step, the conceptual design. The resulting conceptual plans are input for urban planners and designers, who turn them into detailed preliminary designs.

The climateApp and the AST are both web-based software tools running on touch-enabled hardware, because a touch table facilitates 'reasoning together' and is community supportive, empirically based, experimentally oriented, and information and knowledge disseminating (Geertman, 2006).

Climate adaptation app

The Climate Adaptation App was developed to start the design workshop with an overview and pre-ranking of potential measures for all participants (www.climateapp.org or Appstore/Playstore). From different publications (Pötz and Bleuzé, 2012; Van de Ven et al., 2009; Vergroesen et al., 2013) a list of over 120 structural adaptation measures was composed. The app provides information on each measure and ranks measures for potential applicability based on local circumstances and preferences by toggling the different filters (Fig. 2). Design workshop participants go through the list and discuss the applicability and attractiveness of potential measures to create a long list for their project area.

Adaptation support tool

The Adaptation Support Tool (AST) is a touch-table based platform that design workshop participants may use to select adaptation interventions, situate them in their project area and immediately see an estimate of their effectiveness and costs (Fig. 3). The AST consists of a left panel for input, a middle panel for design (map of the project area) and a right panel as an "AST dashboard" for output.
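The filter-based pre-ranking described above can be illustrated with a small sketch. The measure attributes, filter names and scoring rule below are invented for illustration and do not reproduce the actual climateApp database or algorithm; the point is only to show how toggled filters can turn a catalogue of measures into a ranked long list.

```python
# Illustrative sketch of filter-based ranking of adaptation measures (not the real climateApp data).

MEASURES = [
    {"name": "green roof",         "tags": {"heat", "pluvial_flooding", "roof_space"}},
    {"name": "permeable pavement", "tags": {"pluvial_flooding", "drought", "street_space"}},
    {"name": "rain tank",          "tags": {"pluvial_flooding", "building_scale"}},
    {"name": "urban wetland",      "tags": {"pluvial_flooding", "water_quality", "open_space"}},
]

def rank_measures(measures, active_filters):
    """Score each measure by the number of active filters it matches and sort descending."""
    scored = []
    for m in measures:
        score = len(m["tags"] & active_filters)
        scored.append((score, m["name"]))
    return sorted(scored, reverse=True)

# Example: a workshop participant toggles filters for heat and pluvial flooding at street level.
active = {"heat", "pluvial_flooding", "street_space"}
for score, name in rank_measures(MEASURES, active):
    print(f"{name}: matches {score} of {len(active)} active filters")
```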
The current AST version includes a long list 62 blue, green and grey adaptation measures for reduction of pluvial flooding, drought and heat stress (see supplementary material C), a selection assistant for ranking their applicability and an assessment tool to estimate the effectiveness of applied measures.The left panel shows a ranked list of adaptation interventions.The long list of measures has been composed from multiple inventories found in literature.The selection of measures was based on criteria that differ for blue-green and for grey adaptation interventions.As many blue-green interventions were included that the authors and project partners are aware of from both literature and practice.We however selected traditional/grey solutions in such a way that a comparison between traditional and blue green solutions can be made when planning alternative solutions and because traditional (Voskamp and Van de Ven, 2015).These targets differentiate between threshold capacity for damage prevention and coping capacity for damage minimization in case of a failing protection system (De Graaf et al., 2007). In the central panel different map layers can be shown.Default a Google Earth and OpenStreetMap layer are provided, with layers like surface elevation, land ownership, flood depth, heat stress maps or future land use as additional.Design workshop participants can now select a measure from the list left and draw it in the project area on a map layer, on the location where they think that it would provide added value.For example, the user can apply a green roof on a large flat roof, install permeable pavement on sidewalks and artificial wetlands near the outlet of a tributary drain.Next, the tool requests the water storage depth of the measure and the additional contributing inflow area. On the basis of this input, the AST estimates a number of performance indicators, e.g.storage capacity, normative runoff, heat stress reduction, water quality effects, costs and additional benefits.These performance estimates are shown on the right panel.Under the Details tab (not shown) the contribution of each proposed measure to the adaptation targets is given in combination with the estimated costs for realization and maintenance.Users can also switch to the Overview tab of the right panel, as shown in Fig. 3, to get a summary of the measures and their total effectiveness in relation to the adaptation assignment. Results of a session can be saved as snapshots and re-opened at a later moment.This way alternative plans can be created and compared.The tool is web-based and can run both on a webserver and standalone. Adaptation performance indicators The current selection of performance indicators was based on the demand of participants of the design workshops and the role of water as the key to a climate resilient urban environment.The indicators are listed and explained extensively in the Supplementary Material part B, including underlying scientific evidence.The quantified performance indicators include estimated changes of physical characteristics that are relevant for damage reduction, resilience, public health and feasibility. 
-Prevention of flooding due to extreme rainfall requires effective storage (retention) of water as well as peak flow reduction.Created storage volume is shown, as this has to comply with the target volume that our water managers set to reduce pluvial flood risk.The normative runoff frequency allows for estimation of flood risk reduction in terms of a reduction in frequency of a certain peak flow.Estimates of these flood prevention indicators are based on the result of simulation of the effect of a specific adaptation measure, using long time series of rainfall and evaporation -30 years or more -, a climate change scenario, a multi-reservoir rainfall-runoff water balance simulation model, a theoretical design of the intervention and extreme value analysis to quantify changes in effective storage capacity and peak flow reduction.Parameters characterising the hydrological performance of the specific adaptation measures were taken from experimental results reported in the international scientific literature.-Drought control requires groundwater recharge information and inter-seasonal storage of water, in particular in areas prone to land subsidence or a lack of replenishment due to soil sealing.On the other hand, in case of very shallow groundwater tables high recharge rates would lead to the need for subsurface drains.Estimated groundwater recharge also results from output of the multi-reservoir simulation model and a theoretical design of the intervention.Average annual recharge change is calculated as a performance indicator.-Heat stress reduction is achieved by provision of shade and evaporative cooling from vegetation and water surfaces; though, to that end vegetation has to have enough water available, which is related to groundwater recharge.Heat stress reduction is based on the reported observed cooling effect of blue-green infrastructure in Dutch urban areas and scaling based on the dimensions of the measure.-The quality of the water is essential for the functions and services it can provide.To evaluate potential functionality water quality improvement of the blue, green and grey adaptation measures is expressed by three indicators: nutrient reduction, absorbed pollutants reduction and pathogen reduction.These water quality performance indicators are determined as a pollution reduction factors based on recorded effectiveness of treatment processes in a facility and scaling based on the dimensions of the measure.Nature based treatment processes included in the pollution reduction factor include natural degradation, settling and soil filtration.For intensive green roofs fertilization was included as a negative pollution reduction factor for nutrients.-Average costs of construction and costs of management and maintenance are estimated for each adaptation measure based on unit prices on the Netherland's market. The purpose of the AST is to provide estimates on the effectiveness and costs of adaptation interventions in the early planning phase of urban (re)development projects, in order to meet adaptation targets.Such targets can be met by different packages of measures.No framework or guidelines are provided for the selection specific adaptation measures; the AST allows for any strategy to reduce its vulnerability (De Graaf et al., 2007).The actual effectiveness and costs will depend on the implementation which is determined by exact local physical conditions, and specific wishes and ambitions of the stakeholders. 
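As a rough illustration of the flood-prevention indicators described above, the sketch below runs a single-reservoir daily water balance for one placed measure, using its surface area, storage depth and additional contributing inflow area, and counts the days on which the storage overflows. The rainfall series, runoff coefficient and drainage rate are invented placeholders; the actual AST uses long (30 years or more) rainfall and evaporation series, a multi-reservoir model and extreme value analysis, none of which is reproduced here.

```python
import random

# Hypothetical parameters for one measure (e.g. a bioretention strip); all values illustrative.
MEASURE_AREA_M2   = 200.0    # surface area of the measure
CONTRIB_AREA_M2   = 1800.0   # additional paved area draining onto the measure
STORAGE_DEPTH_MM  = 300.0    # water storage depth of the measure
DRAIN_RATE_MM_DAY = 50.0     # assumed emptying rate (infiltration/drainage) per day
RUNOFF_COEFF      = 0.9      # assumed runoff coefficient of the contributing area

capacity_m3 = MEASURE_AREA_M2 * STORAGE_DEPTH_MM / 1000.0

random.seed(1)
# Placeholder daily rainfall series (mm) standing in for a long historical record.
rainfall_mm = [random.expovariate(1 / 2.5) if random.random() < 0.4 else 0.0
               for _ in range(365 * 30)]

storage_m3, overflow_days = 0.0, 0
for p_mm in rainfall_mm:
    inflow_m3 = p_mm / 1000.0 * (MEASURE_AREA_M2 + RUNOFF_COEFF * CONTRIB_AREA_M2)
    outflow_m3 = DRAIN_RATE_MM_DAY / 1000.0 * MEASURE_AREA_M2
    storage_m3 = max(0.0, storage_m3 + inflow_m3 - outflow_m3)
    if storage_m3 > capacity_m3:          # storage exceeded: runoff leaves the measure
        overflow_days += 1
        storage_m3 = capacity_m3

years = len(rainfall_mm) / 365.0
print(f"Effective storage capacity: {capacity_m3:.1f} m3")
print(f"Overflow (normative runoff) frequency: {overflow_days / years:.2f} events per year")
```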
AST applications In the period 2014-2015 the Adaptation Planning Support Toolbox has been used in adaptation processes in different cities (Table 1).Being both AST developers and participant, we learned valuable lessons concerning the optimal use of the Toolbox for local adaptation process.Two examples are briefly addressed here. Beira, Mozambique The city of Beira (Mozambique) frequently floods by heavy rain, having serious health and economic impacts for the 0.5 million residents.Blue-green adaptation measures may increase water retention capacity and will improve the liveability.Discussing adaptation strategies with local Beira stakeholders in a workshop setting has been done based on the following steps (Picketts et al., 2012): Building capacity Municipal civil servants, representatives of the Chota neighbourhood (pilot area), and local university staff (UCM) were briefed by the authors (acting as facilitators) on climate adaptation and the key role the workshop participants have in adaptation planning as experts with important local knowledge. Identifying local impacts and vulnerabilities Climate information was distributed before and during the workshop.It included information on hydrology in urbanized delta regions, flooding maps of Beira based on 3D aerial information, historical climate information and future predictions.The maps and explanation provided a good overview of the impacts and vulnerabilities of Chota and surroundings, including underlying mechanisms.For most workshop participants especially the hydrodynamic information was new, enabling them to better The AST can also be applied as a quick-scan method to assess if e.g.green roofs have an added value for specific urban areas. identify the causes of flooding and the ways flooding can be prevented. Determining priorities and outlining implementation The workshop participants defined short and long term targets to prevent frequent and large-scale flooding of their residential areas in the future.The facilitators calculated the overall retention capacity to achieve these goals.The facilitators then explained about the AST: the goal, the lay out of the AST tools, the range of measures and underlying data.Based on local knowledge the participants selected a number of measures that fit local physical conditions and culture: surface water bodies (channels, small lakes, lagoon), multifunctional green (public green fields that can be inundated temporary).Measures demanding high-level construction and maintenance (e.g.green roofs, technical installations) were rejected, not fitting the local possibilities in water management.Locations within the Chota area where measures could be implemented were identified (Fig. 
3).For each location and accompanying measures the AST calculated water retention capacity and other parameters, based on local meteorological data.By doing so, it became clear for the participants that additional retention nearby Chota was needed, resulting in a proposal for a lagoon development adjacent to Chota.Through field visits the workshop participants together with the municipal board verified whether implementation of the measures (including lagoon) was indeed possible.Most of the recommended interventions were accepted by the municipal council; in one occasion however a land development claim became the topic of discussion, because this development would decrease retention opportunities for the larger area.The mayor of Beira expressed his intention to reject that claim.The total set of measures was further elaborated on a map andtogether with the other informationpresented in a report (Kalsbeek, 2015).See this report for more details and background information on this case.The Chota adaptation plan as composed by the workshop participants and their facilitators was also welcomed at an international financing meeting in September 2015; it now serves as the outline for detailed design of drainage improvement works. Utrecht, the Netherlands In the redevelopment of the Utrecht City Centre À West, there is a need for a more climate resilient, attractive and pleasant accommodation area.Using the AST, stakeholders sketched three alternative plans, selecting different adaptation measures they deemed applicable and effective.Two of these alternatives can be seen in Fig. 4. To make the area more attractive and to reduce the heat stress emphasis was put on greening the area, both at street level and by creating green roofs and urban agriculture on the roofs of the large exhibition halls in the area.Stormwater retention capacity was also created by installing rain tanks, a water square and application of porous pavements.The design workshop participants managed to meet the climate adaptation targets they had set in advance, while creating substantial co-benefits for themselves, for future residents and for the numerous visitors of this area (Van de Ven et al., 2016a,b). Building capacity Representatives of different municipal offices (including urban planning, health, water management, urban green and project development) and representatives of the private stakeholders participated in the Climate KIC Smart Sustainable District project on the sustainable and climate resilient redevelopment of the Utrecht Centre West area and two design workshops.These parties learned about the vulnerability of the area for flooding, drought and heat stress and about the many potential solutions that can be used to strengthen resilience, meanwhile delivering substantial ecosystem, economic and social services. Identifying local impacts and vulnerabilities National climate change scenarios are available for the Netherlands (KNMI, 2015).Flood hazard maps and heat stress maps showed significant climate risks in the project area.Drought however turned out to be less of an issue.An attempt to map all critical and vulnerable objects, networks and population groups for a risk assessment turned out to be complicated.Information is scattered over very many desks.Impacts and vulnerable spots were recognized by the participants of the design workshop. 
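For the plan alternatives discussed in the two cases above, the total retention capacity created by a package of measures can be approximated by summing the storage each measure provides. The sketch below does this for a made-up package resembling the Utrecht alternatives (green roofs, rain tanks, a water square, porous pavement); all areas, depths and the target volume are illustrative assumptions, not figures from the project.

```python
# Illustrative aggregation of the storage created by a package of measures (values are made up).

package = [
    # (measure, area in m2, effective storage depth in mm)
    ("extensive green roof on exhibition halls", 15000.0, 25.0),
    ("water square",                              2000.0, 400.0),
    ("porous pavement",                           8000.0, 40.0),
]
rain_tanks = {"count": 150, "volume_m3_each": 1.0}   # building-scale storage

def storage_m3(area_m2, depth_mm):
    return area_m2 * depth_mm / 1000.0

total_m3 = sum(storage_m3(area, depth) for _, area, depth in package)
total_m3 += rain_tanks["count"] * rain_tanks["volume_m3_each"]

TARGET_M3 = 2000.0   # hypothetical retention target set by the water manager
verdict = "meets" if total_m3 >= TARGET_M3 else "falls short of"
print(f"Created retention capacity: {total_m3:.0f} m3 ({verdict} the {TARGET_M3:.0f} m3 target)")
```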
Adaptation targets for stormwater retention, peak flow reduction and heat stress reduction were quantified on the basis of these climate and land use projections.These targets, though negotiable, are used to evaluate performance of the packages of adaptation measures. Determining priorities and outlining implementation The workshop participants first used the climateApp to get an overview of potentially applicable solutions; most of them were not familiar with the large variety of potential adaptation measures They discovered and learned about other solutions.After that first step they started discussing the applicability and attractiveness of implementing specific adaptation measures on specific sites in the project area.Two alternative plans emerged from this discussion: a blue-green alternative and a high density urban alternative.Both alternatives did not completely meet adaptation targets.That is why a third alternative was produced, the Plus alternative.This alternative combines measures from both the Green and the Urban alternative and meets the adaptation targets on storage/retention capacity and peak flow reduction.Heat stress reduction targets are met at all places where people stay, walk or bike.A first analysis was made of the ecosystem services, the economic and social benefits of the proposed alternative adaptation plans as well as a qualitative analysis of who benefits from implementing the Plus alternative and in which way. The blue-green adaptation plans are now being merged with the mobility adaptation plan and the energy transition plan for the project area to produce comprehensive redevelopment plan alternatives.These alternatives will be used in 2016 (a) to evaluate if adaptation targets are still being met, (b) as input for public engagement sessions and (c) as basis for a value case analysis.This value case analysis is meant to specify the benefits and the beneficiaries of the redevelopment plan in more detail and use this as a basis for a fair distribution of investment and maintenance costs.Results of this value case analysis are meant to support final decision making in 2017 by the City of Utrecht and private stakeholders and project developers on the urban and economic development of the Utrecht City Centre-West area. Addressing adaptation in city planning and design Local adaptation of our urban infrastructure, buildings and environment is required to minimize negative consequences of climate change.A wide variety of blue, green and grey infrastructural measures is available to strengthen resilience against flooding, drought and heat stress.Decisions are to be taken about adaptation targets and about where and how which adaptation measures are to be located.Such an adaptation plan is to be produced in a collaborative planning process of urban planners, engineers other experts, local stakeholders and political decision makers. 
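The target check that drove the choice for the Plus alternative in the Utrecht case can be expressed as a simple comparison of each alternative's estimated performance against the quantified adaptation targets. The indicator values and targets in the sketch below are hypothetical; only the structure of the comparison follows the workflow described above.

```python
# Hypothetical comparison of alternative adaptation plans against quantified targets.

TARGETS = {            # adaptation targets (illustrative numbers)
    "retention_m3": 2000.0,          # stormwater storage/retention
    "peak_flow_reduction_pct": 30.0, # reduction of the design peak flow
    "heat_reduction_degC": 1.0,      # cooling where people stay, walk or bike
}

ALTERNATIVES = {       # estimated performance per alternative (illustrative numbers)
    "Green": {"retention_m3": 1700.0, "peak_flow_reduction_pct": 25.0, "heat_reduction_degC": 1.4},
    "Urban": {"retention_m3": 1500.0, "peak_flow_reduction_pct": 35.0, "heat_reduction_degC": 0.6},
    "Plus":  {"retention_m3": 2150.0, "peak_flow_reduction_pct": 33.0, "heat_reduction_degC": 1.1},
}

for name, performance in ALTERNATIVES.items():
    unmet = [k for k, target in TARGETS.items() if performance[k] < target]
    status = "meets all targets" if not unmet else "misses: " + ", ".join(unmet)
    print(f"{name} alternative: {status}")
```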
Overall, more and more cities recognize the need for adaptation at a policy-level, but lack the practical instruments to go from vulnerability assessments towards adaptation-inclusive urban planningsee e.g.[ND-GAIN, 2016]and lack of support for adaptation investments.Moreover, adaptation is a relative new phenomenon, not considered by everyone as his/her responsibility (Nalau et al., 2015).Investors seem to focus on cost reduction rather than on long term benefits of implementing adaptation measures.The fact that most ecosystem-based adaptation measures not only reduce vulnerability of the urban environment to extreme weather events but also produce substantial economic, ecological and social benefits for the citizens is often overlooked, let alone maximized in spatial planning, partly due to the fact that these benefits are hard to quantify.This lack of quantitative information is partly overcome by implicit evaluations that take place while the participants in this collaborative planning process evaluate the performance information produced by the AST. Role of tools in planning for climate resilience Urban planning and design routine is not equipped yet to easily incorporate climate proofing.To gain public support, there is a need for stakeholder participation when addressing adaptation in city practice (Hurlbert and Gupta, 2015).In a collaborative planning workshop based setting local stakeholders are able to provide their implicit knowledge of the area and of the community's preferences (Picketts et al., 2012;Van Stigt et al., 2015).Many stakeholders however are not aware of the large variety of adaptation options to choose fromthe AST contains 62 -, each with their own pro's and con's.Planning and decision support tools for climate resilient urban design should therefor support knowledge sharing and collaborative exploration of alternative adaptation solutions in community-based meetings.To effectively support policy making, planning support tools should bridge the gap between the worlds of scientific expertise and self-organised adaptation in urban reality (Larsen et al., 2012;Löschner et al., 2016) and that of the creative urban planner.Pyke et al. (2007) conclude that the existing decision support systems are more effective when they balance the provision of information with concern for organizational and political processes. Application experiences with the Adaptation Planning Support Toolbox The Adaptation Planning Support Toolbox has effectively supported climate-proof planning in several cases on different continents.Participants of the design workshops expressed their satisfaction with the way the planning process was structured, with the ranked overview of potential blue, green and grey adaptation measures and with the estimates of the effectiveness and the costs of proposed measures; this information supported a learning process and informed decision making.Concerns on organizational or political issues around details of the plan were discussed among participants at the design table.As such there seemed no need to include such issues explicitly in the tool. 
The toolbox builds on the results of vulnerability assessments and on the willingness to adapt, as e.g.analysed with the Uniform Adaptation Assessment (Chenchen, 2015) or the Climate Stress Test (Deltaprogramma N&H, 2014).Flood hazard maps, heat stress maps and water balance calculations provide valuable information on where to concentrate adaptation efforts.In practice it turned out to be hard to formulate adaptation targets for drought and heat stress.The AST was in such cases used to explore the feasibility of a certain impact reduction. The use of the AST in design workshops requires skilled facilitation.The dialogue that takes place around the design table benefits from an independent facilitator.Moreover, the use of the AST proved to be complex for participants that are not familiar with design workshops and/or with the wide range of potential adaptation measures.In practice, the facilitator or another professional that is trained in the use of the tool assists the application.The Climate Adaptation App is available as a standalone tool, because this tool can be used for individual learning by professionals and non-professionals around the globe. And although decisions on the application of adaptation measures suffer from deep uncertainties on expected climate change and exposure, we have seen in practice that many adaptation measures are selected because of the expected cobenefits of the blue, green and grey measures for the liveability and economic functioning of the urban environment; climate resilience was dealt with as a valuable co-benefit rather than a primary target À as long as adaptation targets were met. As concluded by Pelzer et al. (2013), the use of a touch table during the design workshops proved to be effective in supporting the planning process.The use of the touch table supports learning processes and stimulates thinking beyond the own professional roles.Moreover, the performance indicators shown on the touch table forced participants to be explicit about their proposed interventions and the expected effectiveness.The struggle they reported of the urban planners with the application of the touch table is interesting.Designer's working practice, to which intuitive sketching and visualization are central, is disrupted by the use of the touch table.This was solved by having regular maps, transparent and drawing pens next to the touch table, so that they could sketch their ideas when they felt the need for it.According to Pelzer et al. (2013) designers also felt the integral approach as a barrier to their creativity.This could not be confirmed in our workshop, potentially because the objective of our workshop was more specific than the objective of their workshopcreate a more climate resilient and attractive urban area versus planning a more sustainable new urban area. 
Usability and reliability Performance indicators produced by the AST and used to select and plan adaptation measures are based on evidence-based key figures on the characteristics, performance and costs of each adaptation measure retrieved from international literature (De Jong et al., 2014, 2015;Geisler and Barjenbruch, 2015;Kosteninformatie.nl, 2015;Vergroesen et al., 2013).They are also based on conceptual modelling of the measure's performance using local climate and land use conditions.Although the accuracy is limited we argue that this information is reliable enough to compare different measures and different alternatives and to find a common preference with all participants.Arguments to decide on a specific choice are exchanged, while keeping an eye on their contribution to the adaptation targets and on their cost-effectiveness. Conceptual designs are so far made without quantified information on performance of proposed adaptation measures; the availability of a more or less reliable performance and cost estimation is a valuable contribution to informed decisions on the selection and design of adaptation measures. The Toolbox is used for planning problems at building to district scale; use at larger scale level is questionable because the tools do not consider interconnections and flow capacities between adaptation measures.Estimated performance at larger scale could consequently be misleading. Moreover, the AST shows only performance indicators regarding climate resilience in relation to the water system and estimated costs; other benefits and co-benefits of the measurese.g.landscape quality, added economic and social valueare not quantified, but in practice play an important role in the dialogue and selection decisions of the workshop participants.Quantified information would give the benefits a more equal treatment in the selection and decision making process as compared to the costs. Research to find out which information on co-benefits session participants would like to see on the AST dashboard is on-going. Measures against heat stress tend to have local effects.In order to evaluate heat stress control measures we would like to visualise the local cooling effects of planned blue green measures in a map instead of presenting a general decrease in average areal temperature as a figure on the dashboard.We planned to realise this functionality in 2016. Another relevant question is who should participate in the design workshops.Participation of urban planners, landscape architects, water managers, civil engineers, local stakeholders and other experts is evident.But how about participation of city council members and commercial developers?The fact that city council members participated in the design workshop in Beira turned out very effective for further decision making.In other cases political decision makers were not invited by the host of the design workshop; further study is required to evaluate the impact of their participation. 
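The map-based presentation of local cooling effects discussed above can be prototyped with a simple distance-decay footprint around each planned blue-green measure. The grid resolution, cooling amplitudes and decay lengths below are purely illustrative assumptions; they are not the evidence-based key figures used in the AST.

```python
import math

# Purely illustrative grid-based footprint of local cooling around planned measures.
GRID_N, CELL_M = 40, 10.0   # 40 x 40 grid of 10 m cells

# Each measure: (x index, y index, peak cooling in degC, decay length in m) - made-up values.
measures = [(10, 12, 1.5, 60.0),   # e.g. a pocket park
            (28, 25, 0.8, 40.0)]   # e.g. a cluster of green roofs

cooling = [[0.0] * GRID_N for _ in range(GRID_N)]
for gy in range(GRID_N):
    for gx in range(GRID_N):
        total = 0.0
        for mx, my, peak, decay_m in measures:
            dist_m = math.hypot(gx - mx, gy - my) * CELL_M
            total += peak * math.exp(-dist_m / decay_m)   # exponential distance decay (assumption)
        cooling[gy][gx] = min(total, 2.0)                 # cap the combined effect (assumption)

# Coarse text rendering: '#' where local cooling exceeds 0.5 degC, '.' elsewhere.
for row in cooling[::4]:
    print("".join("#" if value > 0.5 else "." for value in row[::4]))
```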
The toolbox was used both in the Netherlands and abroad.For the applications in Beira and London the key figures for calculating the performance indicators of each measure had to be calculated with the local climate and local land use data.So far this has been done manually and has required substantial effort.For easier applications abroad this process could be automated.Cost figures remained unchanged so far; if local unit cost figures are available these can be brought in the tool without much effort.Moreover a stronger coupling (export-import function) of the AST with hydraulic and hydrological simulation models for plan evaluation would be convenient. Conclusions There is a gap in the tools available to support resilient, climateproof urban planning.Tools and procedures are available for climate vulnerability assessment and for evaluating the performance of final designs with the help of simulation models.But tools that have the ability to support implementing adaptation in the actual urban planning and design practice, i.e. to support defining the program of demands, setting adaptation targets, for selecting adaptation measures from a wide variety of blue, green and grey adaptation measures and for informed co-creation of a conceptual design, seem to be missing. To close this gap and support the planning of a climate resilient urban environment we developed and tested an Adaptation Planning Support Toolbox.The toolbox contains a Climate Adaptation App (climateApp) and the Adaptation Support Tool (AST).From our applications so far we conclude that this Toolbox meets the demands of local policymakers, planners, designers and practitioners to provide evidence-based support for their collaborative analysis Àdialoguedesign-engineering (= planning) sessions.Participants appreciated the AST because its overview and pre-ranking of a wide range of potential adaptation measures, the possibility to create different adaptation design options (scenarios) for their own project area, and to explore the contribution of these options on adaptation targets and co-benefits.Discussions on the design table were focussed on the opportunities and the benefits of specific interventions, rather than on the costs.The combination of informing, exploring and testing at the same time, and doing this in a collaborative dialogue with relevant stakeholders, is considered as of added value to current adaptation planning practice. Essential is that urban planners, landscape designers, water managers, urban green managers have to learn how to combine their working practice in such a collaborative planning and design process.This transition requires courage and perseverance from all parties, and will lead to further development of the toolbox or similar tools.With more and more cities worldwide that will make the step from climate policymaking to an actual adaptationinclusive urban (re)development practice we foresee a growing demand of tools like the climateApp and the AST to ensure that adaptation will be seriously adopted by the local actors while maximizing the social and economic co-benefits of the adaptation measures. Fig. 1 . Fig.1.Adaptation planning process, stakeholder engagement and planning support tools.Both tools (bold) in the Adaptation Planning Support Toolbox will be discussed in this article. Fig. 2 . Fig. 2. Screen of the Climate Adaptation App (www.climateapp.org).Adaptation measures are ranked by toggling the filters.More information on a measure is obtained by clicking the tile. Fig. 3 . Fig. 3. 
Screen components of the Adaptation Support Tool. Left on the touch screen is the ranked list of 62 adaptation measures. Selected measures are planned in the project area (middle). At the right side is the AST dashboard, showing the resilience performance of the total package of measures and of each active measure. Shown is the application of the AST in Beira, Mozambique.

Fig. 4. Example of AST application: two of the alternative conceptual adaptation plans for the Utrecht City Centre-West, each with its own set of adaptation measures and, consequently, a different contribution to adaptation targets and co-benefits. The Green alternative (right) proved less effective than the Plus alternative (left) (Van de Ven et al., 2016a,b).

Table 1. Overview of applications of the Adaptation Planning Support Toolbox.
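As noted in the discussion, a tighter coupling of the AST with hydraulic and hydrological simulation models would benefit from an export-import function. A minimal sketch of such an export is given below: the planned measures are written as a GeoJSON feature collection that a downstream model could ingest. The property names and geometries form a hypothetical schema, not the actual AST file format.

```python
import json

# Hypothetical export of planned measures to GeoJSON for use in an external simulation model.
measures = [
    {"type": "green roof", "storage_depth_mm": 30, "area_m2": 1200,
     "polygon": [[4.885, 52.370], [4.886, 52.370], [4.886, 52.371], [4.885, 52.371], [4.885, 52.370]]},
    {"type": "water square", "storage_depth_mm": 400, "area_m2": 900,
     "polygon": [[4.890, 52.372], [4.891, 52.372], [4.891, 52.373], [4.890, 52.373], [4.890, 52.372]]},
]

feature_collection = {
    "type": "FeatureCollection",
    "features": [
        {
            "type": "Feature",
            "geometry": {"type": "Polygon", "coordinates": [m["polygon"]]},
            "properties": {"measure_type": m["type"],
                           "storage_depth_mm": m["storage_depth_mm"],
                           "area_m2": m["area_m2"]},
        }
        for m in measures
    ],
}

with open("ast_measures_export.geojson", "w") as fh:
    json.dump(feature_collection, fh, indent=2)
print("Wrote", len(feature_collection["features"]), "measures to ast_measures_export.geojson")
```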
A transient expression tool box for anthocyanin biosynthesis in Nicotiana benthamiana Summary Transient expression in Nicotiana benthamiana offers a robust platform for the rapid production of complex secondary metabolites. It has proven highly effective in helping identify genes associated with pathways responsible for synthesizing various valuable natural compounds. While this approach has seen considerable success, it has yet to be applied to uncovering genes involved in anthocyanin biosynthetic pathways. This is because only a single anthocyanin, delphinidin 3‐O‐rutinoside, can be produced in N. benthamiana by activation of anthocyanin biosynthesis using transcription factors. The production of other anthocyanins would necessitate the suppression of certain endogenous flavonoid biosynthesis genes while transiently expressing others. In this work, we present a series of tools for the reconstitution of anthocyanin biosynthetic pathways in N. benthamiana leaves. These tools include constructs for the expression or silencing of anthocyanin biosynthetic genes and a mutant N. benthamiana line generated using CRISPR. By infiltration of defined sets of constructs, the basic anthocyanins pelargonidin 3‐O‐glucoside, cyanidin 3‐O‐glucoside and delphinidin 3‐O‐glucoside could be obtained in high amounts in a few days. Additionally, co‐infiltration of supplementary pathway genes enabled the synthesis of more complex anthocyanins. These tools should be useful to identify genes involved in the biosynthesis of complex anthocyanins. They also make it possible to produce novel anthocyanins not found in nature. As an example, we reconstituted the pathway for biosynthesis of Arabidopsis anthocyanin A5, a cyanidin derivative and achieved the biosynthesis of the pelargonidin and delphinidin variants of A5, pelargonidin A5 and delphinidin A5. Introduction Anthocyanins are water-soluble pigments that give many fruits and flowers their colour.They are made in the cytosol through a series of enzymatic steps from the precursor phenylalanine and are stored in the vacuole, where, in some species, additional enzymatic steps also take place (Saito et al., 2013;Tanaka et al., 2008).The first coloured and stable products are the basic anthocyanins pelargonidin 3-O-glucoside (P3G), cyanidin 3-Oglucoside (C3G) and delphinidin 3-O-glucoside (D3G).In many plants, these basic anthocyanins undergo further modifications, such as glycosylation, acylation and methylation, which impact the colour and stability of the anthocyanins.While the biosynthetic pathways leading to the basic anthocyanins are well understood, there is still limited knowledge about the genes and enzymes involved in the synthesis of more complex anthocyanins. Investigating the function of genes putatively involved in anthocyanin biosynthesis usually requires the production of recombinant protein in E. coli.The purified enzymes are then tested in vitro by incubation with the expected substrate in a suitable reaction buffer.Finally, the reaction products are analysed by chromatography and mass spectrometry.Such a strategy works well to biochemically characterize one or a few genes, but is time-consuming as it requires cloning of the gene of interest in an expression construct, expressing the gene in E. 
coli and purifying the recombinant protein.It also requires having access to a specific substrate that may not always be commercially available, which then needs to be chemically synthesized.This strategy is therefore not well suited for quickly screening large numbers of candidate genes. Transient expression in Nicotiana benthamiana provides a powerful alternative for the identification of genes involved in the biosynthesis of interesting secondary metabolites.This method involves cloning coding sequences of candidate genes in an expression vector under the control of a robust constitutive promoter (the Cauliflower Mosaic Virus 35S promoter) and expressing them transiently in N. benthamiana leaves using agroinfiltration.The high efficiency of this process allows coexpression of multiple genes simultaneously using pools of agrobacterium strains, each strain for expression of a single candidate gene (Christ et al., 2019).One advantage of screening multiple genes in a single step is that it enables exploration of entire pathways, even in the absence of precise knowledge regarding the order of successive enzymatic steps.Moreover, this approach overcomes challenges associated with unstable intermediate metabolites by rapidly converting them to subsequent intermediates until the final, usually more stable product is synthesized.In situations where the initial precursor for the pathway of interest is lacking in N. benthamiana, it is possible to introduce a chemically synthesized substrate during or This is an open access article under the terms of the Creative Commons Attribution-NonCommercial-NoDerivs License, which permits use and distribution in any medium, provided the original work is properly cited, the use is non-commercial and no modifications or adaptations are made.shortly after infiltration (Fu et al., 2021).The versatility of the N. benthamiana expression platform has led to the rapid elucidation of several complex secondary metabolite pathways in recent years, significantly expediting research progress (Kwan et al., 2023). The N. benthamiana platform has, so far, not been used for the elucidation of anthocyanin biosynthetic pathways.This is because induction of anthocyanin biosynthesis by transcription factors in N. benthamiana does not work well, produces very low levels of anthocyanins and leads only to the production of delphinidin-3-O-rutinoside (D3R), a precursor for some, but not all, more complex anthocyanins (Hugueney et al., 2009;Outchkourov et al., 2014). In this study, we have devised a platform capable of producing any desired basic anthocyanin through the infiltration of a few essential constructs.Additionally, for synthesizing more complex anthocyanins, the platform accommodates the infiltration of species-specific biosynthetic genes.The strategy involves overexpressing certain anthocyanin biosynthetic genes that are either inadequately expressed or absent in N. benthamiana, while simultaneously silencing others that might impede the biosynthesis of some anthocyanins of interest.Therefore, the current investigation should be useful to enable the rapid, high-yield production of complex natural anthocyanins but also of novel anthocyanins not found in nature. 
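Co-infiltration of defined sets of constructs requires combining one Agrobacterium culture per construct into a single infiltration mix. The sketch below shows a common way to calculate the culture volumes needed so that each strain ends up at the same optical density in the final mix. The target OD600, mix volume and measured culture densities are typical-practice assumptions for illustration; they are not values reported in this study.

```python
# Sketch of preparing a co-infiltration mix, one Agrobacterium strain per construct.
# Target per-strain OD600 and final volume are illustrative assumptions.

TARGET_OD_PER_STRAIN = 0.2   # assumed final OD600 contributed by each strain
FINAL_VOLUME_ML = 30.0       # assumed total volume of the infiltration mix

cultures = {                 # measured OD600 of each resuspended culture (hypothetical values)
    "35S:PAL": 1.6, "35S:CHS": 1.4, "35S:F3H": 1.8, "35S:F3'H": 1.5,
    "35S:DFR": 1.7, "35S:ANS": 1.3, "35S:GST": 1.9,
}

volumes = {}
for construct, od in cultures.items():
    # C1 * V1 = C2 * V2 -> culture volume needed to reach the target OD in the final mix
    volumes[construct] = TARGET_OD_PER_STRAIN * FINAL_VOLUME_ML / od

buffer_ml = FINAL_VOLUME_ML - sum(volumes.values())
for construct, volume in volumes.items():
    print(f"{construct}: add {volume:.2f} mL of culture")
print(f"Top up with {buffer_ml:.2f} mL infiltration buffer to reach {FINAL_VOLUME_ML} mL")
```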
Infiltration of transcription factors results in low-level anthocyanin biosynthesis Anthocyanin biosynthesis is regulated by a complex of transcription factors that include MYB, bHLH and WDR proteins (Koes et al., 2005;Ramsay and Glover, 2005).It has been reported that transient or stable expression of MYB and bHLH transcription factors is sufficient to enable anthocyanin biosynthesis in tissues where anthocyanins are not normally expressed (Butelli et al., 2008;Starkevic et al., 2015;Zhang et al., 2013).In N. benthamiana, transient expression of the Arabidopsis MYB transcription factor PAP1 was shown to lead to the biosynthesis of D3R (Hugueney et al., 2009).Co-expression of the MYB and bHLH transcription factors from Antirrhinum majus, Rosea1 and Delila also led to the biosynthesis of D3R (Outchkourov et al., 2014).Here, we have compared the efficiency of these transcription factors to induce anthocyanin biosynthesis in N. benthamiana leaves.The transient expression of Delila alone did not lead to any colour or anthocyanin biosynthesis in the infiltrated area.Expression of Rosea1 alone or PAP1 alone both led to a weak grey colour, and LC-MS analysis revealed a single peak with the mass of delphinidin-3-O-rutinoside (Figure 1).Expression of Delila together with Rosea1 or PAP1 also led to grey colour and biosynthesis of D3R, but the infiltrated areas became necrotic in some leaves.Therefore, infiltration of constructs for expression of transcription factors in N. benthamiana can lead to anthocyanin biosynthesis, but only of D3R.In addition, only a very low amount of this anthocyanin can be obtained. Co-infiltration of all anthocyanin biosynthetic genes leads to anthocyanin biosynthesis Achieving the biosynthesis of other anthocyanins should be feasible by transient expression of all biosynthetic genes in a given pathway without relying on transcription factors.To test this approach, we expressed all known biosynthetic genes necessary for the production of C3G.The pathway consists of 10 genes, including three genes for phenylpropanoid biosynthesis (phenylalanine ammonia-lyase, PAL; cinnamate-4-hydroxylase, C4H; 4coumarate:CoA ligase, 4CL) and seven genes for flavonoid biosynthesis (chalcone synthase, CHS; chalcone isomerase, CHI; flavanone 3-hydroxylase, F3H; flavonoid 3 0 -hydroxylase, F3 0 H; dihydroflavonol 4-reductase, DFR; anthocyanidin synthase, ANS; and anthocyanin 3-O-glucosyltrasferase, 3GT) (Saito et al., 2013;Shi and Xie, 2014) (Figure 2a).In addition, at least one additional gene, coding for a glutathione S-transferase (GST), is required for the transport and sequestration of anthocyanins to the vacuole (Sun et al., 2012).All genes were cloned from Arabidopsis by PCR amplification from cDNAs using gene-specific primers.The GST, which had been first characterized in maize and petunia (Alfenito et al., 1998;Marrs et al., 1995), was cloned from petunia.We also cloned an anthocyanin permease (AP) involved in the transport of proanthocyanidin precursors to the vacuole (TT12) to check whether it could also be used for anthocyanin transport to the vacuole.All genes were subcloned in expression vectors under control of the 35S promoter and were transiently expressed in N. 
benthamiana leaves by Agrobacterium-mediated delivery of the constructs.A reddish-brown colour appeared in the infiltrated area 3 days after infiltration and continued to develop in the next few days, suggesting successful anthocyanin biosynthesis (Figures 2b and 3a).LC-MS analysis revealed the presence of two peaks that showed the typical UV-absorbance of anthocyanins with a maximum of 510-520 nm, with [M + H] + ions at m/z 449 and 595 characteristic for C3G and cyanidin 3-O-rutinoside (C3R), respectively (Figures 2c and 3b).The presence of C3R, which has a rhamnose attached to the glucose residue of C3G (Figure 3c), indicates that some endogenous flavonoid biosynthesis genes from N. benthamiana are also expressed in the infiltrated leaf and contribute to anthocyanin formation.To test which genes may already be expressed endogenously, we infiltrated the complete set of genes, omitting each gene individually in 12 separate infiltrations (Figure S1).We then repeated the infiltrations with the minimal set of genes identified (Figure 2b).These data indicate that four genes are absolutely required: CHS, DFR, ANS and GST, and no anthocyanin is produced without them.Two genes are already expressed in the N. benthamiana genome (PAL and F3H ), but additional transient expression increases the amount of anthocyanin produced.One gene, F3 0 H, is not expressed in the N. benthamiana genome (as expected) and was used for the biosynthesis of C3G.Omitting F3 0 H led to the expression of pelargonidin 3-O-glucoside (P3G) and pelargonidin 3-O-rutinoside (P3R, Figure 2c; Figure 3a,b).Finally, five genes do not need to be transiently expressed as they are already sufficiently expressed in the N. benthamiana genome (C4H, 4CL, CHI and 3GT ) or are not necessary (potentially the AP ).In summary, seven genes need to be co-expressed for efficient production of C3G (PAL, CHS, F3H, F3 0 H, DFR, ANS and GST ).Biosynthesis of P3G is obtained by omitting the F3 0 H gene. For the biosynthesis of D3G, we replaced the F3 0 H gene with a flavonoid 3 0 5 0 hydroxylase (F3 0 5 0 H).Since Arabidopsis makes cyanidin-based anthocyanins and does not have a F3 0 5 0 H, we used a Petunia hybrida gene (Figure 3a).Surprisingly, six peaks were observed, with an UV-absorbance between 510-530 nm, characteristic for anthocyanins and with masses characteristic for pelargonidin glycosides, P3G 3b, mix 3).Apparently, the petunia F3 0 5 0 H is not specific for delphinidin and also leads to the production of pelargonidin and cyanidin.A campanula F3 0 5 0 H has been reported to lead to the selective accumulation of delphinidin derivatives in tobacco (Okinaka et al., 2003).Here, we cloned and tested a F3 0 5 0 H gene from Campanula persicifolia.The use of this gene gives rise to mostly delphinidins, D3G and D3R (Figure 3b, mix 4). As GST was the only gene not sourced from Arabidopsis in the previous experiment (except F3 0 5 0 H ), we also tested the GST from Arabidopsis (TT19) as an alternative to the petunia GST.Our findings revealed comparable results in terms of anthocyanin accumulation (Figure S2). 
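The drop-out screen used above, in which the full construct set is infiltrated repeatedly while omitting one gene at a time, is easy to lay out programmatically. The sketch below generates leave-one-out mixes for the constructs named in the text; treat it purely as an illustration of the experimental design, with the exact composition of each of the 12 infiltrations given in Figure S1 rather than here.

```python
# Leave-one-out infiltration design: the full construct set is infiltrated repeatedly,
# omitting a single gene each time. Gene names follow the constructs listed in the text;
# see Figure S1 for the exact composition of each mix used in the study.
full_set = ["At PAL", "At C4H", "At 4CL", "At CHS", "At CHI", "At F3H",
            "At F3'H", "At DFR", "At ANS", "At 3GT", "Ph GST", "At AP"]

dropout_mixes = {omitted: [g for g in full_set if g != omitted] for omitted in full_set}

for omitted, mix in dropout_mixes.items():
    print(f"Infiltration without {omitted}: {len(mix)} constructs -> "
          "anthocyanin still produced? (scored from leaf colour and LC-MS)")
```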
It is known that DFR genes from various species exhibit different substrate specificities (Johnson et al., 2001).Therefore, we conducted a comparison between the DFR gene from Arabidopsis and from at least another species, tomato (Solanum lycopersicum).The DFR gene from Arabidopsis was suitable for the production of all basic anthocyanin glycosides.In contrast, pelargonidin glucosides were produced less efficiently using the DFR gene from tomato (Figure S2).This result is consistent with previous observations that indicate that the tomato DFR has a substrate preference for dihydromyricetin (Bovy et al., 2002;Butelli et al., 2021), although it can also act on dihydroquercetin, as shown here. Knockout of the N. benthamiana rhamnosyltransferase genes It would be useful to be able to produce anthocyanin 3-Oglucosides without the rutinoside derivatives.One or several rhamnosyltransferases (RhamTs) must be endogenously expressed in N. benthamiana leaves.A blast search of the N. benthamiana genome in the Sol Genomics Network database (https://solgenomics.net/) (Fernandez-Pozo et al., 2015) made using a petunia RhamT gene sequence as a query (GenBank X71059) identified two candidate genes on chromosome 4 and 17, with predicted cDNAs Niben101Scf00113g06004.1 and Niben101Scf02173g00001.1.Two transcripts, Nbv6.1trP56414 and Nbv6.1trP70740, were also identified in the N. benthamiana genome from the University of Queensland (https://benthgenome.qut.edu.au/)(Nakasugi et al., 2014).Both genes seem to be expressed in leaves, according to ATLAS (https://sefapps02.qut.edu.au/atlas/tREX6.php).An alignment of the four sequences shows that the same two genes were identified in both databases, even though the sequences are not identical, potentially as a result of sequencing errors or from being incomplete sequences (Figures S3 and S4). Mutations in both genes were generated by the transformation of a CRIPSR construct (pAGM44963) in the N. benthamiana standard lab strain.A line lacking Cas9 and with mutations in all four alleles, line Nb 29-2, was identified in the progeny of one of the primary transformants.The plant analysed had a single nucleotide (nt) insertion (an A) and a 49 nt deletion in homologues of the gene for transcript Nbv6.1trP56414 and a single nucleotide insertion (an A) and a 7 nt deletion in homologues of the gene for transcript Nbv6.1trP70740 (Figure S5).Since all alleles have mutations that introduce frameshifts in their sequence, all progeny plants should have a null phenotype as well.As expected, infiltration of this line with constructs for biosynthesis of all three basic anthocyanins produced the anthocyanin glucosides without the rutinoside derivatives (Figure 3d,e).In the chromatograms of all WT plants, UV-signals and the related [M + H] + ions at m/z 579, 595 and 611 represent the rutinosides of pelargonidin, cyanidin and delphinidin (Figure 3b).These signals are missing in the infiltrations performed in the CRISPR line. Use of transcription factors for high-level production of anthocyanins As shown above, co-expression of all genes in a pathway can lead to the production of all basic anthocyanins.This protocol is, Enzymes that need to be transiently expressed for high-level C3G biosynthesis are highlighted in yellow.The enzymes that do not require transient expression are highlighted in grey.Additional biosynthetic enzymes, such as glucosyltransferases (GT), acyltransferases (AT) and methyltransferases (MT), are required for the biosynthesis of more complex anthocyanins.(b) N. 
benthamiana leaves infiltrated with mixes of Agrobacterium strains for expression of anthocyanin biosynthetic genes, 6 days after infiltration.The leaves were infiltrated with agrobacterium strain mixes for expression of all 13 genes tested (mix a + b, with mix a for expression of At Pal, At CHS, At F3H, At F3 0 H, At ANS, At DFR, Ph GST and mix b for expression of At C4H, At 4CL, At CHI, At 3GT, At AP) or for expression of a minimal set of genes (mix a).The leaves were also infiltrated with Agrobacterium strain mixes similar to mix a, but lacking one of the genes, as indicated.however, not very robust and is dependent on the growing conditions and the time of year (for plants grown in a greenhouse).In some cases, very low anthocyanin production is obtained, and anthocyanins are barely visible on the leaf.It would be useful to be able to more efficiently induce the anthocyanin biosynthetic pathway using transcription factors, as not only biosynthetic pathway genes are induced by Delila and Rosea1, but also upstream genes involved in the biosynthesis of the precursors phenylalanine and malonyl CoA (Outchkourov et al., 2018).However, as seen in Figure 1, the use of the transcription factors Delila, Rosea1 or PAP1 only resulted in a low amount of anthocyanin produced.What could be the reason for that? We hypothesized that one of the genes that should be induced by Rosea1 and Delila must be limiting and may either not be induced or be weakly induced in N. benthamiana, or may have a mutation lowering its function.To find out which one, we infiltrated wildtype N. benthamiana plants with the construct for expression of Rosea1 (which led to the highest amount of anthocyanin production in figure 1) together with each of the seven biosynthetic genes that we have identified as required for anthocyanin production, each one in a separate infiltration.A very clear result shows that expression of ANS is the problem and that co-infiltration of a heterologous ANS with Rosea1 solves the problem (Figure 4a).LC-MS analysis shows that the anthocyanin produced is still D3R (Figure 4b,c).Infiltration of constructs for expression of Rosea1 and ANS on N. benthamiana plants from line Nb 29-2 resulted in the production of D3G.Therefore, co-expression of Rosea1 and Arabidopsis ANS in the leaves of Nb 29-2 benthamiana plants or wildtype plants can lead to high levels of production of pure D3G or D3R, respectively. and biosynthesis of all basic anthocyanins To produce basic anthocyanins other than D3G using transcription factors, it would be necessary to have a line lacking F3 0 5 0 H enzymatic activity.Such a line would be expected to produce only the pelargonidin glycoside after expression of Rosea1 and ANS (Figure 2a).Additional expression of a heterologous F3 0 H gene would be expected to lead to the formation of the cyanidin glycoside. Two F3 0 5 0 H homologues were identified in the N. benthamiana genome on chromosomes 1 and 2 using a blast search of the Sol Genomics database with the petunia F3 0 5 0 H gene (sequence of pAGM10493) as a query.Two predicted cDNAs, Niben101Scf14625g02006.1 and Niben101Scf03963g01002.1, were picked out.The Niben101Scf03963g01002.1 coding sequence is present in two reading frames.There is probably an error in this sequence, as an alignment of the genomic sequence with the two introns manually removed indicates a single reading frame for both homologues (Figure S6).A CRISPR construct containing a single guide RNA targeting both homologues, pAGM70984, was transformed in the N. 
Primary transformants were screened by infiltration of constructs for expression of Rosea1 and ANS. Surprisingly, the infiltration did not lead to visible anthocyanin formation in 7 out of 10 tested transformants (Figure S7). This is probably because the DFR gene from N. benthamiana cannot use dihydrokaempferol efficiently as a substrate, as is the case for the DFR genes of other plants in the Solanaceae family. Co-infiltration of the same plants with Rosea1 and ANS together with a construct for expression of the campanula F3'5'H restored delphinidin biosynthesis, confirming that the two F3'5'H homologues were indeed knocked out and not another gene involved in anthocyanin biosynthesis.

Sequencing of a PCR product amplified from one of the two target genes from genomic DNA extracted from positive plants revealed a mix of sequences indicative of the presence of mutations at this locus (not shown). Unfortunately, the mutant plants were found to be sterile and could not be used further.

As an alternative, we tried to silence both homologues by RNAi using a hairpin construct made using a 298 bp DNA fragment from one of the two N. benthamiana F3'5'H genomic sequences (Figure S6). Infiltration of constructs for expression of Rosea1, Arabidopsis ANS and the hairpin construct led to the absence of anthocyanin biosynthesis (Figure 5a-c, mix 1). Infiltration of the same constructs together with a DFR gene from Pelargonium zonale led to P3G biosynthesis (Figure 5b,c, mix 2), confirming the hypothesis that the DFR from N. benthamiana cannot use, or does not use, dihydrokaempferol efficiently as a substrate. Infiltration of Rosea1 and Arabidopsis ANS, F3'H and DFR led to a high level of C3G (Figure 5b,c, mix 3), and to a lower level of C3G when Arabidopsis DFR was omitted (Figure S8). When the same construct combinations were infiltrated on wildtype N. benthamiana plants, the corresponding rutinoside derivatives were obtained (Figure 5e). In summary, we can produce high levels of the three basic anthocyanins by infiltrating defined combinations of 2-5 constructs in the leaves of N. benthamiana line 29-2.

Production of the three basic anthocyanins using transcription factors was more robust and reproducible than when using only pathway genes, as previously shown in Figure 3. For example, a high amount of anthocyanin was visible on all three leaves of infiltrated plants (Figure S9a). Also, infiltration of the same construct combinations for production of the three basic anthocyanins in different batches of plants and at different dates repeatedly led to high-level anthocyanin biosynthesis (Figure S9b). The amount of C3G that was produced in multiple experiments (Figures 5-7; Figures S8 and S9) was quantified using a standard curve made with a C3G standard. It ranged from 1306 to 3515 µM (0.6 to 1.7 mg/g FW), with an average of 2523 µM (1.22 mg/g FW).

Biosynthesis of complex anthocyanins

Anthocyanins usually show a more complex substitution pattern than simple 3-O-glycosylation (Saito et al., 2013). For example, Arabidopsis anthocyanins typically consist of a mix of compounds that contain a C3G backbone linked with up to three sugar residues and three acyl groups (Saito et al., 2013; Shi and Xie, 2014; Tohge et al., 2005). We have tried here to reconstitute the biosynthetic pathway for Arabidopsis anthocyanins.
One of the Arabidopsis anthocyanins, A5 (a cyanidin derivative), requires the expression of four genes (UGT79B1/At5g54060, UGT75C1, also noted At 5GT/At4g14090, At 3AT1/At1G03940 and At 5MAT/At3g29590) for incorporation of two sugars (glucose and xylose) and two acyl groups (coumaroyl and malonyl; Figure 6c). Co-infiltration of constructs for induction of anthocyanin biosynthesis (Rosea1), silencing of the endogenous F3'5'H genes and expression of additional Arabidopsis biosynthetic genes (ANS, F3'H, DFR and the four genes necessary for A5 biosynthesis) led to strong coloration. LC-MS analysis revealed two peaks with a molecular mass of 975 [M + H]+, consistent with the mass of the anthocyanin A5 (Figure 6b, mix 2). The earlier eluting peak was present in a lower amount and is proposed to be a cis-coumaroyl isomer of A5, while the second, more prominent peak should be the trans isomer (Rowan et al., 2009).

Co-infiltration of one additional glucosyltransferase gene to add a glucose residue to A5 (At Bglu10/AT4G27830) led to a peak with a molecular ion [M + H]+ at m/z 1137, consistent with anthocyanin A8, as expected (Figure 6b, mix 3). A8 was, however, detected in a very small amount, and the majority of anthocyanin produced was still A5. The addition of yet one more gene for the addition of a 'final' sinapoyl residue (At SAT/At2G23000) did not lead to the biosynthesis of the expected anthocyanin, A11, but gave rise to the same pattern as obtained without this construct (Figure 6b, mix 4).

The biosynthesis of anthocyanin A5 in high amounts provided the opportunity to try to generate novel anthocyanins. We infiltrated sets of constructs to produce the pelargonidin and delphinidin versions of A5 (Figure 7). These would have the same decoration pattern as A5, but on pelargonidin and delphinidin backbones. Consistent with our expectations, infiltration of the constructs indeed led to the biosynthesis of the expected anthocyanins, with the corresponding molecular ions [M + H]+ at m/z of 959 and 991 for pelargonidin A5 and delphinidin A5, respectively (Figure 7b, mixes 2 and 4). In all three infiltrations, a major peak and a minor peak were observed, corroborating the two expected isomers. Delphinidin A5 is a novel anthocyanin that has not been previously described.
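Extending the nominal-mass bookkeeping used earlier for the basic glycosides, the sketch below (again an illustration added here, not an analysis from the paper; the acyl-group increments are assumed textbook values) reproduces the reported ion masses of the decorated anthocyanins.

```python
# Illustrative extension of the nominal-mass bookkeeping to the decorated
# anthocyanins discussed above (assumed textbook increments, not from the paper).

INCREMENT = {"glucosyl": 162, "xylosyl": 132, "coumaroyl": 146,
             "malonyl": 86, "sinapoyl": 206}  # sinapoyl listed for completeness; A11 was not obtained

def nominal_mass(aglycone, groups):
    """Aglycone nominal mass plus the increments of all attached groups."""
    return aglycone + sum(INCREMENT[g] for g in groups)

# A5 decoration: two glucoses, one xylose, coumaroyl and malonyl on the aglycone.
A5_GROUPS = ["glucosyl", "xylosyl", "coumaroyl", "glucosyl", "malonyl"]

print("A5 (cyanidin backbone):", nominal_mass(287, A5_GROUPS))                 # 975
print("A8 (A5 + glucose):     ", nominal_mass(287, A5_GROUPS + ["glucosyl"]))  # 1137
print("pelargonidin A5:       ", nominal_mass(271, A5_GROUPS))                 # 959
print("delphinidin A5:        ", nominal_mass(303, A5_GROUPS))                 # 991
```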
Discussion

The set of tools presented here allows us to reconstitute anthocyanin biosynthetic pathways in just a few days once all needed genes are cloned. Subcloning of biosynthetic genes in an expression construct can be done extremely easily since a binary vector that already contains a promoter (the 35S promoter for high expression in transient assays) and a terminator (the Nos terminator), pAGM53151, is available (Addgene #169093). Cloning of the genes in this vector is usually done with Golden Gate cloning using the enzyme BsaI, but can also be done by homology-based cloning of a PCR product using Gibson assembly or any alternative method such as SLIC or TEDA (Xia et al., 2019).

In this work, we were able to produce the basic anthocyanins P3G, C3G and D3G and their rhamnosylated derivatives. Production of C3G using transcription factors was on average 1.2 mg/g of fresh weight, reaching an amount found in some anthocyanin-rich fruits (Avula et al., 2023). Production of the basic anthocyanins requires infiltrating two constructs for D3G, four constructs for P3G and five constructs for C3G. One of the constructs that is needed harbours an anthocyanidin synthase gene, as the endogenous enzyme from the standard N. benthamiana lab strain is not working properly. The lab strain is able to express an enzyme with ANS activity, as a low amount of the delphinidin rutinoside is produced after infiltration of the MYB transcription factors Rosea1 or PAP1. However, much higher anthocyanin amounts can be produced by transiently co-expressing a heterologous ANS gene together with the MYB gene. The ANS of the lab strain is either expressed at too low a level, or one or both of the potential homologues may contain one or several mutations. A blast search of the N. benthamiana transcriptome of the University of Queensland (https://benthgenome.qut.edu.au/) using the Arabidopsis ANS as a query identified two transcripts annotated as anthocyanidin synthase, Nbv6.1trP1132 and Nbv6.1trP59416. Nbv6.1trP59416 contains a frameshift in its coding sequence and seems to be inactive. Further work will be required to understand which gene contributes to ANS activity in the lab strain and whether all the homologues are functional or not. It has recently been reported that the N. benthamiana cultivars Northern Territory and QLD are able to produce anthocyanins at a higher level than the standard lab strain (Bally et al., 2015; Ranawaka et al., 2023; Thole et al., 2019). It should be possible to generate a RhamT mutant in any of these backgrounds, as we have done for the standard lab strain, to produce anthocyanins by transient expression without the need for co-infiltrating an ANS construct. Alternatively, the N. benthamiana lab strain could be stably transformed with a construct for ANS expression to make a line suitable for high-level anthocyanin production without the need for co-expression of an ANS construct. As a simpler alternative, we plan to subclone all genes needed for expression of the basic anthocyanins, including ANS, in a single T-DNA construct. This will result in three constructs, each one for the production of a basic anthocyanin. With such constructs, it should be possible to produce any basic anthocyanin by infiltration of a single construct.

Production of more complex anthocyanins can be performed by co-infiltrating constructs carrying additional pathway genes. We were able to produce Arabidopsis anthocyanin A5 at a high level and in relatively pure form by expressing four genes in addition to the genes required for the production of C3G. Production of anthocyanin A8 was also successful, but only a very low amount was produced. Two alternative explanations are plausible. First, the glucose that was added to A5 is normally added as the last step in the biosynthesis of the major Arabidopsis anthocyanin A11. In natural conditions, a sinapoyl residue is first attached to A5, leading to anthocyanin A9, which is then converted to A11 by the addition of a final glucose residue (Miyahara et al., 2013; Yonekura-Sakakibara et al., 2012). It is therefore possible that the addition of the glucose residue would be more efficient using A9 as a substrate rather than A5. However, A9 could not be produced at all. This is because the acyl sugar donor required for the biosynthesis of A9 from A5 by SAT is sinapoylglucose (Fraser et al., 2007), which is likely not present in N. benthamiana.
The second explanation why only a small peak of A8 was obtained is that the last two steps of A11 anthocyanin biosynthesis take place in the vacuole and are carried out by vacuole-localized enzymes. Unlike cytosolic glycosylation steps, which are catalysed by UDP-sugar-dependent UGTs, vacuolar glycosides are synthesized by acyl-glucose-dependent anthocyanin UGTs (Sasaki et al., 2014). In the case of Bglu10, the acyl sugar donor is normally sinapoylglucose, but the enzyme is known to also accept other acyl sugar donors (Miyahara et al., 2013). Therefore, the small amount of A8 obtained by expression of At Bglu10 was likely synthesized using a different type of acyl sugar donor, but less efficiently than if sinapoylglucose had been used.

The failure to make anthocyanin A11 indicates that the lack of some specialized metabolite precursors in N. benthamiana may be a limitation for reconstitution of the biosynthetic pathways of other complex anthocyanins. There are, however, potential solutions for this problem. Indeed, it should be possible to transiently express genes required for the biosynthesis of any specialized metabolite that is needed as a substrate, and we will test this in future work. Alternatively, the delivery of some chemically synthesized precursors by infiltration, or even a combination of both approaches, that is the delivery of some chemical precursors and some biosynthetic enzymes, could be sufficient to engineer the biosynthesis of any needed substrate (Kwan et al., 2023).

Transient expression of transcription factors and/or biosynthetic genes may not always be sufficient for reconstitution of heterologous biosynthetic pathways, as sometimes the endogenous metabolism of the host plant interferes with the biosynthesis of the desired product. In the case of anthocyanin biosynthesis, a RhamT expressed endogenously in N. benthamiana leaves led to the addition of a rhamnose to all basic anthocyanin 3-O-glucosides, preventing the biosynthesis of more complex anthocyanins that should not contain this sugar at this position. Infiltration of all genes necessary for the biosynthesis of the basic anthocyanins without using any transcription factor produced anthocyanin 3-O-glucosides, but only as part of a mix that also contained rutinoside derivatives. The ratio between the two peaks appears to be developmentally regulated, as the ratio varied in different leaves of the same plant (not shown). This suggests that the RhamT-encoding gene(s) are expressed endogenously before infiltration. These genes are also strongly induced by the expression of Rosea1, as only the rhamnosylated version of delphinidin was produced when using the transcription factors PAP1 or Rosea1. Two strategies were tested to remove the RhamT activity. The first one consisted of silencing the two RhamT-encoding genes by RNAi, using transient expression of a hairpin construct. This worked quite well, but remnants of rhamnosylation were detected in the anthocyanins produced (Figure S10). In contrast, when both RhamT-encoding genes were inactivated by CRISPR mutagenesis, rhamnosylation was completely eliminated in knockout plants. The residual RhamT activity that was observed when using a hairpin construct for silencing may come from a pool of RhamT enzyme already present in the leaf before infiltration, which, of course, could not be removed by silencing. Therefore, CRISPR mutagenesis, in this case, provided a more complete solution.

Other genes potentially interfering with anthocyanin biosynthesis are the endogenous N. benthamiana F3'5'H genes that are induced by the expression of Rosea1.
Expression of these genes is not a problem for making delphinidin-based anthocyanins, and is even beneficial for this purpose, but it must be prevented to produce other anthocyanins. Unlike with the RhamT-encoding genes, F3'5'H activity could be completely suppressed by transient expression of a silencing hairpin construct, probably because the F3'5'H genes were not expressed before induction by Rosea1 transient expression. This is corroborated by the fact that expression of all biosynthetic genes to produce C3G minus the F3'H did not lead to the biosynthesis of D3G, but only led to the formation of P3G and P3R (Figure 2). Being able to use a hairpin construct for silencing rather than CRISPR mutagenesis provides more flexibility and convenience for the user, as it does not require maintaining an additional N. benthamiana line in the greenhouse. Therefore, only the N. benthamiana Nb 29-2 line is necessary for producing any basic anthocyanin of choice, as well as some more complex anthocyanins. The standard wildtype N. benthamiana lab strain can still be used to make anthocyanins based on rhamnosylated basic anthocyanins, or to make non-rhamnosylated anthocyanins by co-infiltration of the hairpin constructs pAGM81975 or pAGM81987 to transiently (but not completely) silence the RhamT genes.

In conclusion, the system presented here should allow users to make any basic anthocyanin of choice, to produce complex anthocyanins found in other plant species and to engineer novel anthocyanins not found in nature. This system should be useful to help identify genes involved in the biosynthesis of complex anthocyanins for pathways that are not yet elucidated.

Generation of constructs

Gene coding sequences were amplified from cDNAs using gene-specific primers and cloned as MoClo level 0 modules in plasmid pICH41308 using BpiI as previously described (Engler et al., 2014). Restriction sites for BsaI and BpiI were removed from internal sequences during cloning using primers to introduce silent mutations. All level 0 modules were sequenced. A list of all level 0 modules and level 1 constructs used for transient expression is shown in Figure S11. The sequence of all level 0 modules is given in Figure S12.

Two other DFR-coding sequences were cloned from Pelargonium zonale and tomato. The pelargonium sequence was amplified from P. zonale petal cDNAs with primers designed from GenBank sequence AB534774, resulting in level 0 module plasmid pAGM47851. The tomato DFR coding sequence was amplified from tomato leaf cDNAs with primers designed from GenBank sequence Z18277 (Bongue-Bartelsman et al., 1994), resulting in level 0 module plasmid pAGM17641.

A petunia F3'5'H coding sequence was cloned from cDNA prepared from petunia purple/blue flowers. PCR primers were designed using GenBank sequence Z22545.1 (Holton et al., 1993). The cloned sequence was, however, found to be identical to the F3'5'H sequence from GenBank sequence EF371021. The plasmid containing the cloned level 0 module is pAGM10493. An F3'5'H sequence was also cloned from Campanula persicifolia. The 3' end of the sequence was cloned by amplification from petal cDNAs using primers designed from the Campanula medium F3'5'H sequence (GenBank HW349464; primers bluecyp20, tt ggtctc a ACAA agccttctgtttctgccaatgacttgg, and bluecyp24, tt ggtctc a ACAA aagcctagacagtgtaagcacttggagg). The 5' end of the sequence could not be amplified using primers designed from the C. medium sequence and was determined using 5' RACE. The complete sequence, with BsaI and BpiI restriction sites removed using silent mutations, was cloned as a level 0 module, resulting in plasmid pAGM44931.
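The removal of internal BsaI and BpiI sites mentioned above (the "domestication" step before level 0 cloning) is straightforward to check programmatically. The sketch below is an illustrative helper, not part of the published workflow; the recognition sequences used (GGTCTC for BsaI and GAAGAC for BpiI, plus their reverse complements) are standard values, the example sequence is made up, and the actual introduction of silent mutations is not implemented here.

```python
# Minimal sketch: scan a coding sequence for internal BsaI and BpiI
# recognition sites that would need removal by silent mutation before
# MoClo level 0 cloning. Illustrative only; not from the paper.

def find_sites(seq, site):
    """Return 0-based positions of `site` on the given strand of `seq`."""
    seq, site = seq.upper(), site.upper()
    hits, start = [], seq.find(site)
    while start != -1:
        hits.append(start)
        start = seq.find(site, start + 1)
    return hits

# Recognition sequences (top strand) and their reverse complements.
SITES = {
    "BsaI": ["GGTCTC", "GAGACC"],
    "BpiI": ["GAAGAC", "GTCTTC"],
}

def internal_sites(cds):
    """Map enzyme name -> sorted positions of its recognition site in `cds`."""
    return {enzyme: sorted(p for s in patterns for p in find_sites(cds, s))
            for enzyme, patterns in SITES.items()}

if __name__ == "__main__":
    example = "ATGGGTCTCAAAGAAGACTTTGAGACCTAA"  # made-up test sequence
    print(internal_sites(example))  # {'BsaI': [3, 21], 'BpiI': [12]}
```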
A construct for silencing of the two N. benthamiana F3'5'H genes was made by amplifying an F3'5'H fragment from cDNA prepared from a N. benthamiana leaf infiltrated with pAGM45244 (Rosea1) and pAGM10775 (At ANS) 3 days before RNA extraction. Amplification of sense and antisense fragments was performed with primers F35s5 (tt gaagac aa CTCA AATG gTGGCCGGTGATCGGCGCACTAC) and F35s6 (tt gaagac aa CTCG ACCT tACATTTGCCCAATTTTCTAAGGCTTTTCCC), and F35s3 (tt gaagac aa CTCA CAGG TACATTTGCCCAATTTTCTAAGGCTTTTCCC) and f35s4 (tt gaagac aa CTCG AAGC GTGGCCGGTGATCGGCGCACTAC). Both PCR products were cloned using BpiI into cloning vector pAGM9121 (Addgene Plasmid #51833), and the resulting clones, pAGM81735 and pAGM81750, were sequenced. pAGM81735 and pAGM81750 were then assembled with a spacer sequence in a binary vector construct with the 35S promoter and the Nos terminator, resulting in construct pAGM81963 (sequence in Figure S13).

The constructs needed for producing anthocyanins in N. benthamiana will be deposited at Addgene. These include pAGM45244, pAGM10775, pAGM10764, pAGM54418, pAGM10440, pAGM81963, pAGM81975 and pAGM81987 for making basic anthocyanins from either the N. benthamiana wildtype or line 29-2. In addition, plasmids for making Arabidopsis-specific anthocyanins, pAGM13414, pAGM13425, pAGM13437, pAGM19465, pAGM19477 and pAGM13451, are also available, as well as the empty level 1 expression vector containing the 35S promoter and Nos terminator, pAGM53151. The RhamT-deficient N. benthamiana line 29-2 is available on request.

CRISPR mutagenesis

Constructs for CRISPR mutagenesis were designed using an intronized Cas9 sequence as previously described (Grutzner et al., 2021). A first CRISPR construct was designed to target the two RhamT homologues identified in the N. benthamiana genome. A target site present in both genes (rtd1, ggaggtagaccttcaacttg agg) was selected using CHOPCHOP (https://chopchop.cbu.uib.no/). The resulting CRISPR construct, pAGM44963, was transformed into the N. benthamiana standard lab strain. Transgenic lines with active Cas9 were selected by PCR amplification with two primers for sequences present in both genes, nibrt1 (ctcctactttggcttctccatgtc) and nibrt11 (cccttgagtcctgagagtacac), and by sequencing the PCR product with gene-specific primers nibrt7 (cacacgctcaatattgtactacag) and nibrt8 (cactacaacacataattatactacagaatc).

To identify all mutations present in positive lines, a PCR product was amplified from genomic DNA with primers in sequences conserved in both homologues, nibrt9-p2 (catttacaattatcgatacagctcaccattcattctacctacc) and nibrt10-p2 (gcttgactctagaggatcgtaggaccgccatggaagctcttg) (sequences in italics are 5' extensions with homology to a cloning vector). The PCR product was cloned into vector pAGM71445 by homology-based cloning, and 10 clones per plant were sequenced and analysed.

A second CRISPR construct was designed to target the two F3'5'H homologues identified in the N. benthamiana genome. A single guide RNA targeting both homologues (TGTGGCATGGCTCCTAAGTATGG) was selected using CRISPOR (http://crispor.tefor.net/). The resulting construct, pAGM70984, was transformed into N. benthamiana line Nb 29-2. To identify plants with mutations at the target sites, a PCR product was amplified from one of the two target genes with primers site49F (ggtgttatttactgagcttactatagcag) and site49R (cttgtgcattataggccaaatgggtg) from genomic DNA extracted from the transformants.

Transient expression in N. benthamiana

Constructs were transformed into Agrobacterium strain GV3101:pMP90. The transformed Agrobacterium strains were grown at 28 °C in LB medium supplemented with rifampicin and either carbenicillin for level 1 constructs or kanamycin for level 2 constructs (all at 50 µg/mL). The cultures were diluted to an OD600 of 0.2 in an infiltration solution containing 10 mM MES pH 5.5 and 10 mM MgSO4 and infiltrated into the leaves of greenhouse-grown N. benthamiana plants using a syringe without a needle. The three main leaves of 6-8-week-old plants (depending on season) were infiltrated, usually resulting in the highest expression level in the upper and middle leaves. The best of the three leaves, the upper or middle leaf, was used for analysis. For co-expression of several constructs, all Agrobacterium strains at an OD600 of 0.2 in infiltration solution were mixed in equal amounts before infiltration.
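As a practical illustration of the dilution and mixing step just described, the sketch below computes the volumes needed to bring each strain to OD600 0.2 before equal-volume mixing. It is an illustrative helper only; the OD readings, strain names and final volume are made-up values, and a simple linear dilution of OD600 is assumed.

```python
# Illustrative helper (made-up values, simple linear dilution assumed):
# volumes of overnight culture and infiltration solution needed to reach
# OD600 0.2 per strain, for an equal-volume co-infiltration mix.

def dilution(od_culture, final_volume_ml, od_target=0.2):
    """Return (culture_ml, infiltration_solution_ml) to reach od_target."""
    culture_ml = od_target * final_volume_ml / od_culture
    return culture_ml, final_volume_ml - culture_ml

cultures = {"Rosea1": 1.8, "At ANS": 2.2, "hairpin": 1.5}  # hypothetical OD600 readings
per_strain_ml = 10.0 / len(cultures)  # equal volumes into a 10 mL final mix

for name, od in cultures.items():
    culture_ml, buffer_ml = dilution(od, per_strain_ml)
    print(f"{name}: {culture_ml:.2f} mL culture + {buffer_ml:.2f} mL infiltration solution")
```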
Anthocyanin analysis and quantification

For all samples, 12 mg of leaf tissue were harvested in 2 mL tubes and frozen in liquid nitrogen. Three steel beads were added per tube, and the samples were ground two times for 30 s at 30 Hz with an electric mill. One hundred and twenty microliters of methanol buffer (50% methanol, 1 mM ascorbic acid and 0.5% formic acid) was added, and the samples were vortexed and then incubated on ice for 15 min. The samples were spun at 13,000 rpm for 10 min at 4 °C. The supernatant was centrifuged one more time. Two to ten microliters of the supernatant were used for the HPLC analysis.

Extracts were analysed by reversed-phase HPLC/ESI-MS on a Nucleoshell RP 18 100-3 mm column (Macherey-Nagel, Düren, Germany) at a flow rate of 0.45 mL/min, with either one of two different linear gradient settings depending on the anthocyanins analysed: (a) for experiments with only basic anthocyanin 3-O-glycosides like P3G, C3G, D3G and C3R, a gradient from 2% solvent B (acetonitrile) up to 20% B in solvent A (0.1% trifluoroacetic acid) at a rate of 1%/min; and (b) for experiments with anthocyanins with a complex substitution pattern, like A5, a gradient from 5% solvent B (acetonitrile) up to 30% B in solvent A (0.1% trifluoroacetic acid) within 20 min at a rate of 1.25%/min. Separation was performed on an Alliance e2695 chromatography system (Waters, Eschborn, Germany), equipped with a Waters 2996 photodiode array and a Waters QDA mass detector, respectively. Compounds were identified by UV/VIS in maxplot detection from 280 to 600 nm and by ESI-MS between m/z 400 to 700 (basic anthocyanin glycosides) and m/z 200 to 1200 (complex anthocyanins) in positive ionization mode, with the cone voltage set at either 15 or 20 V, respectively, both analysed using the Empower 3 software (Waters). As a reference, standards of P3G, C3G, C3R and D3G were obtained from Merck (Darmstadt, Germany).

For anthocyanin quantification, a standard curve was made by running different concentrations of a commercially available C3G standard, giving the formula Y = 9810X − 3790, with Y being the integrated area of the C3G peak and X the concentration of the sample in µM.
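The quantification step amounts to inverting this standard curve. The sketch below illustrates the conversion; it is added here for clarity, and the example peak area is a made-up value (the units of the integrated area depend on the chromatography software).

```python
# Minimal sketch: invert the C3G standard curve Y = 9810*X - 3790
# (Y = integrated peak area, X = concentration in µM) to convert a
# measured peak area into a concentration. Example area is made up.

SLOPE, INTERCEPT = 9810, -3790

def c3g_concentration_uM(peak_area):
    """Concentration (µM) of C3G in the injected extract from its peak area."""
    return (peak_area - INTERCEPT) / SLOPE

example_area = 2.5e7  # hypothetical integrated peak area
print(f"{c3g_concentration_uM(example_area):.0f} µM")  # ≈ 2549 µM
```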
Figure 1 Transient expression of PAP1, Rosea1 and Delila in Nicotiana benthamiana leaves. (a) N. benthamiana leaves infiltrated with mixes of Agrobacterium strains for expression of Rosea1 and Delila (mix 1), PAP1 and Delila (mix 2), Rosea1 (mix 3), PAP1 (mix 4) and Delila (mix 5), 6 days after infiltration. The two pictures show two leaves of the same plant infiltrated with the same Agrobacterium strain mixes, with the lower leaf being lower on the plant. (b) LC-MS analysis of infiltrated tissue extracted from the upper leaf shown in (a). Compounds were identified by UV/VIS in maxplot detection from 280 to 600 nm and ESI-MS between m/z 400 to 700 in positive ionization mode [M + H]+.

Figure 2 Anthocyanin biosynthesis in Nicotiana benthamiana leaves by transient expression of biosynthetic genes. (a) The anthocyanin biosynthetic pathway starts with phenylalanine. Enzymes that need to be transiently expressed for high-level C3G biosynthesis are highlighted in yellow. The enzymes that do not require transient expression are highlighted in grey. Additional biosynthetic enzymes, such as glucosyltransferases (GT), acyltransferases (AT) and methyltransferases (MT), are required for the biosynthesis of more complex anthocyanins. (b) N. benthamiana leaves infiltrated with mixes of Agrobacterium strains for expression of anthocyanin biosynthetic genes, 6 days after infiltration. The leaves were infiltrated with Agrobacterium strain mixes for expression of all 13 genes tested (mix a + b, with mix a for expression of At Pal, At CHS, At F3H, At F3'H, At ANS, At DFR, Ph GST and mix b for expression of At C4H, At 4CL, At CHI, At 3GT, At AP) or for expression of a minimal set of genes (mix a). The leaves were also infiltrated with Agrobacterium strain mixes similar to mix a, but lacking one of the genes, as indicated. (c) LC-MS analysis of infiltrated tissues of infiltrations shown in (b). Compounds were identified by UV/VIS in maxplot detection from 280 to 600 nm and ESI-MS between m/z 400 to 700 in positive ionization mode [M + H]+.

Figure 3 Biosynthesis of the basic anthocyanins in Nicotiana benthamiana leaves by transient expression of biosynthetic genes. (a) Leaf of a N. benthamiana wildtype plant infiltrated with mixes of Agrobacterium strains for biosynthesis of P3G (infiltration 1), C3G (infiltration 2) and D3G (infiltrations 3 and 4), 6 days after infiltration. The components of the Agrobacterium strain mixes 1-4 are detailed in (b). (b) LC-MS analysis of infiltrated tissues shown in (a). (c) Structure of P3G/P3R (R1 = R2 = H), C3G/C3R (R1 = OH, R2 = H) and D3G/D3R (R1 = R2 = OH), with the structure of the anthocyanin 3-O-glucosides shown on the left and the rutinosides on the right. (d) Infiltration of the same Agrobacterium strain mixes as in (a) but in a leaf of a plant of the N. benthamiana line 29-2, which lacks RhamT activity. (e) LC-MS analysis of infiltrated tissues shown in (d). Compounds were identified by UV/VIS in maxplot detection from 280 to 600 nm and ESI-MS between m/z 400 to 700 in positive ionization mode [M + H]+.

Figure 4 Anthocyanin biosynthesis in Nicotiana benthamiana leaves by co-expression of Rosea1 and selected anthocyanin biosynthetic genes. (a) Leaves of N. benthamiana wildtype infiltrated with an Agrobacterium strain for expression of Rosea1 alone (neg) or with strains for expression of Rosea1 and one of the anthocyanin biosynthetic genes, as indicated, 6 days after infiltration. (b) Leaves of N. benthamiana wildtype (left) or line Nb 29-2 (right) infiltrated with strains for expression of Rosea1 and At ANS. (c) LC-MS analysis of infiltrated tissues of infiltrations shown in (b). Compounds were identified by UV/VIS in maxplot detection from 280 to 600 nm and ESI-MS between m/z 400 to 700 in positive ionization mode [M + H]+.

Figure 5 Biosynthesis of the basic anthocyanins in Nicotiana benthamiana leaves by co-expression of Rosea1 and selected biosynthetic genes. (a) Schematic representation of the constructs present in Agrobacterium strains infiltrated in leaves shown in (b) and (d). (b) Leaf of a N. benthamiana wildtype plant infiltrated with Agrobacterium strain mixes 1-4 as detailed in (a). (c) LC-MS analysis of infiltrated tissues shown in (b). (d) Leaf of a plant from N. benthamiana line Nb 29-2 infiltrated with the same Agrobacterium strain mixes (1-4) as in (b). (e) LC-MS analysis of infiltrated tissues shown in (d). Compounds were identified by UV/VIS in maxplot detection from 280 to 600 nm and ESI-MS between m/z 200 to 1200 in positive ionization mode [M + H]+.
Figure S1 Infiltration of different combinations of pathway genes.
Figure S2 Comparison of Petunia and Arabidopsis GST and of Arabidopsis and tomato DFR.
Figure S3 Nucleotide alignment of RhamT gene sequences.
Figure S4 Nucleotide alignment of RhamT gene sequences.
Figure S5 Chromosomal mutations in RhamT genes in Nicotiana benthamiana line 29-2.
Figure S6 Alignment of Nicotiana benthamiana F3'5'H cDNAs.
Figure S7 Screening of Nicotiana benthamiana F3'5'H CRISPR lines.
Figure S8 Biosynthesis of C3G and C3R in Nicotiana benthamiana line 29-2 and wildtype with and without heterologous expression of At DFR.
Figure S9 Reproducible high-level production of basic anthocyanins.
Figure S10 Silencing of Nicotiana benthamiana RhamT genes by transient expression.
Figure S11 List of MoClo level 0 modules and corresponding level 1 constructs used for transient expression.
Figure S12 Sequence of MoClo level 0 modules.
Figure S13 Sequence of construct for silencing Nicotiana benthamiana F3'5'H genes, pAGM81963.
Reconsidering the Role of Melatonin in Rheumatoid Arthritis

Rheumatoid arthritis (RA) is an inflammatory joint disorder characterized by synovial proliferation and inflammation, with eventual joint destruction if inadequately treated. Modern therapies approved for RA target the proinflammatory cytokines or Janus kinases that mediate the initiation and progression of the disease. However, these agents fail to benefit all patients with RA, and many lose therapeutic responsiveness over time. More effective or adjuvant treatments are needed. Melatonin has shown beneficial activity in several animal models and clinical trials of inflammatory autoimmune diseases, but the role of melatonin is controversial in RA. Some research suggests that melatonin enhances proinflammatory activities and thus promotes disease activity in RA, while other work has documented substantial anti-inflammatory and immunoregulatory properties of melatonin in preclinical models of arthritis. In addition, disturbance of the circadian rhythm is associated with RA development, and melatonin has been found to affect clock gene expression in the joints in RA. This review summarizes current understanding about the immunopathogenic characteristics of melatonin in RA disease. Comprehensive consideration is required by clinical rheumatologists to balance the contradictory effects.

Introduction

Rheumatoid arthritis (RA) is an autoimmune disease that is characterized by synovial proliferation and inflammatory responses; the presence of autoantibodies, including rheumatoid factor and anti-citrullinated protein antibodies (ACPA), in sera; cartilage and bone erosion with deformity; and co-occurring health conditions such as cardiovascular disease events and pulmonary, psychological, and metabolic bone disorders [1]. Proinflammatory cytokines that mediate the progression of RA disease include tumor necrosis factor alpha (TNF-α), interleukin 1 beta (IL-1β), and IL-6. Current international guidelines for patients with early RA recommend starting disease-modifying antirheumatic drugs (DMARDs) as soon as possible, with methotrexate being the preferred choice [2]. Methotrexate is usually supplemented with short-term, low-dose oral or intra-articular glucocorticoids (GCs) for fast relief of pain and swelling and for arresting the inflammatory process. GCs must be carefully managed to prevent their inappropriate use and tapered as soon as possible to avoid long-term adverse effects [2]. The highly efficacious biologic DMARDs targeting the proinflammatory cytokines and Janus kinase inhibitors are intended for patients with persistently active and bone differentiation [23]. Moreover, while RORα is not a receptor for melatonin or its metabolites, the constitutive activity of RORα may be modulated by membrane melatonin receptors [21,23]. Melatonin also displays antioxidant and anti-inflammatory activity, depending on the cellular state [24,25]. Evidence suggests that melatonin serves as a link between circadian rhythms and joint diseases, including RA and osteoarthritis (OA) [26,27]. For instance, oxidative stress induced by RA is reduced by melatonin and/or its metabolites, which not only neutralize the reactive oxygen (ROS) and reactive nitrogen species (RNS), but also upregulate levels of glutathione and antioxidant enzyme expression and activity [28,29]. In RA and OA, melatonin and its metabolites modulate several molecular signaling pathways, including those governing inflammation, proliferation, and apoptosis [28,29].
A Role for Melatonin in Rheumatoid Arthritis Therapy?

In investigations in animal models of inflammatory autoimmune diseases (multiple sclerosis, systemic lupus erythematosus, inflammatory bowel disease, and type 1 diabetes), melatonin has demonstrated beneficial effects, including prophylactic and therapeutic effects in rats with adjuvant-induced arthritis (AA), in which melatonin dose-dependently repressed the inflammatory response and enhanced proliferation of thymocytes and secretion of IL-2 [30]. In addition, melatonin decreased the elevated level of cyclic 3',5'-AMP (cAMP) induced by forskolin. The drop in thymocyte proliferation induced by injection of Freund's complete adjuvant was highly correlated with a decrease in the levels of Met-enkephalin (Met-Enk) in the thymocytes, which were strikingly augmented by melatonin; this effect was blocked by the Ca2+ channel antagonist nifedipine. The anti-inflammatory and immunoregulatory actions of melatonin involved a G protein-adenylyl cyclase-cAMP transmembrane signal and Met-Enk release by thymocytes [30].

Modulation of the Circadian Clock by Melatonin in RA

Importantly, circadian rhythms exist in almost all cells of the body and are regulated by circadian clock gene expression [7,15]. Any disruption in these circadian clocks is associated with the onset of inflammatory-related disease states and joint diseases, including RA [7,15]. Patients with RA exhibit abnormal clock gene expression, with disturbances in the hypothalamic-pituitary-adrenal axis influencing changes in circadian rhythms of circulating serum levels of melatonin, IL-6, and cortisol, and in chronic fatigue [15]. Melatonin exerts its effects in RA by modulating clock gene expression, including the Cry1 gene [7,15]. By attenuating the expression of the Cry1 gene, melatonin upregulates cAMP production and increases activation of protein kinase A (PKA) and nuclear factor kappa B (NF-κB), which increases CIA severity in rats [31,32]. As detailed earlier, the diurnal secretion of melatonin is also closely related to the production of IL-12 and NO among RA synovial macrophages and human monocytic myeloid THP-1 cells [33]. A positive correlation has been reported between elevated early morning serum melatonin concentrations and disease activity scores as well as erythrocyte sedimentation rate (ESR) levels in patients with juvenile rheumatoid arthritis, although higher melatonin concentrations did not correlate with disease severity [34], echoing the findings of Forrest and colleagues reported earlier, who noted that elevated ESR and neopterin concentrations following melatonin treatment did not worsen the severity of RA disease [35]. El-Awady and colleagues suggested that melatonin may promote the activity of RA disease, rather than its severity [34]. However, as reported above, Akfhamizadeh and colleagues found no link between elevated morning serum melatonin concentrations and RA disease activity or other disease characteristics, despite also observing significantly higher melatonin values in newly diagnosed RA patients compared with those who had established RA disease [36]. There also appears to be a relationship between melatonin and the Bmal1 and ROR clock genes [15]. It is speculated that high melatonin concentrations in RA patients may modulate ROR activation [15].
ROR acts as a negative regulator of inflammation via the NF-κB signaling pathway and is essential in the activity of both melatonin and the clock gene Bmal1, which helps to maintain 24-h rhythms and regulate immune responses [15,37]. Moreover, ROR proteins bind to the promoter region and drive Bmal1 gene expression [38]. This activity at the binding site is inhibited by reverse-erythroblastosis virus (REV-ERB) proteins, which may contribute to Bmal1 suppression and exacerbation of RA [15].

Adverse Effects of Melatonin in RA

Evidence suggests that melatonin is not beneficial in RA. For instance, the development of collagen-induced arthritis (CIA) in DBA/1 mice is exacerbated by constant darkness [39] and by daily exogenous administration of melatonin at 1 mg/kg [40]. Hansson and colleagues then investigated the effects of surgical pinealectomy in DBA/1 and NFR/N mice with collagen-induced arthritis (CIA) [41]. Serum melatonin levels were reduced in the pinealectomized mice to around 30% of levels in normal or sham-operated controls [41]. In both mouse strains, pinealectomy was associated with a delay in onset of arthritic disease, less severe arthritis (lower clinical scores), and lower serum anti-CII levels compared with sham-operated animals [41]. The researchers interpreted these findings as showing that high physiological levels of melatonin stimulate the immune system and worsen CIA, while inhibiting the release of melatonin is beneficial [41]. Their speculation was supported by observations from mice subjected to 30 days of Bacillus Calmette-Guérin (BCG) inoculations into the left hind paw, inducing chronic granulomatous inflammation [42]. Higher vascular permeability was seen around the granulomatous lesions at midnight than at midday; this rhythmic variation was eliminated by pinealectomy and restored by nocturnal replacement of melatonin [42]. This ability of melatonin to modulate the immune response was further illustrated by experiments in which the production of IL-12 and nitric oxide (NO) was significantly increased in the media of melatonin-stimulated RA synovial macrophages and cultured THP-1 cells compared with RPMI-treated synovial macrophage controls [33]. Unexpectedly, the opposite effects on IL-12 and NO levels were seen when RA synovial macrophages were pretreated with lipopolysaccharide (LPS) prior to melatonin, as compared with synovial macrophages treated with LPS alone [33]. This study explained the possible mechanism of joint morning stiffness in relation to diurnal rhythmicity of neuroendocrine pathways [33]. These conclusions are supported by later evidence from in vitro and in vivo studies, as well as clinical investigations, showing how melatonin stimulates the production of NO and of T helper type 1 (Th1)-type and other inflammatory cytokines besides IL-12 (IL-1, IL-2, IL-4, IL-5, IL-6, TNF-α, granulocyte-macrophage colony-stimulating factor [GM-CSF], transforming growth factor [TGF]-β, and interferon [IFN]-γ), and enhances both cell-mediated and humoral responses [43-45]. In the early morning, patients with RA exhibit high serum levels of proinflammatory cytokines, especially TNF-α and IL-6, when melatonin serum concentrations are also higher [6,43]. The effects of these circadian rhythms are thought to promote the joint pain and morning stiffness that characterize RA [6]. Animal studies have shown that melatonin treatment (10 mg/kg) dysregulates circadian clock genes, which may promote the progression of RA [31].
Intriguingly, a dual effect of melatonin as a proinflammatory agent and antioxidant has been observed in CIA rats [32]. In that study, a lower dosage of melatonin (30 µg/kg) increased anti-collagen antibody, IL-1β, and IL-6 levels in the serum and joints of arthritic rats, worsening the severity of joint damage, while simultaneously lowering the oxidative markers nitrite/nitrate and lipid peroxidation in serum, but not in joints [32].

Neutral or Beneficial Effects of Melatonin in RA

Notably, a cross-sectional study from Iran has reported finding significantly higher morning serum levels of melatonin in patients with RA compared with healthy controls, but no correlation between melatonin and RA disease activity score or other disease characteristics, including age, disease duration, medications, gender, or season of sampling [36]. The study also reported finding higher serum melatonin values in newly diagnosed patients compared with patients with established RA, which needs further investigation [36].

Matrix metalloproteinases (MMPs) are a family of endopeptidases primarily responsible for catalyzing the degradation of the extracellular matrix (ECM) [46]. MMPs play important roles in RA. Elevated levels of circulating MMP-3, MMP-8 and MMP-9 are associated with disease progression in RA [47]. In particular, MMP-2 and MMP-9 are expressed in synoviocytes, CD34+ endothelial cells, monocytes and macrophages of rheumatoid synovium, indicating that both molecules are critical to pannus formation and invasion in RA progression [47]. Interestingly, melatonin reportedly directly inhibits secreted MMP-9 by binding to the active site and significantly reducing the catalytic activity of MMP-9, both in vitro and in cultured cells, in a dose- and time-dependent manner [46]. Thus, melatonin could have an important role in the prevention of joint destruction in RA.

Research demonstrating that melatonin dose-dependently inhibits the proliferation of RA fibroblast-like synoviocytes (FLS) through activation of the ERK/P21/P27 pathway suggests that inhibiting the invasion of RA FLS through cartilage and into bone may have important implications in the treatment of RA [48]. Blocking NF-κB signaling appears to be the way in which melatonin protects cells from oxidative stress [28] and largely explains how melatonin suppresses proinflammatory cytokines such as IL-1β and TNF-α [20]. Other pathways and molecules associated with inflammation that are modulated by melatonin include the mitogen-activated protein kinase (MAPK) and nuclear erythroid 2-related factor 2 (Nrf2) pathways, as well as Toll-like receptors [15]. In a clinical trial involving RA patients, six months of melatonin treatment (10 mg/day) was associated with a general decrease from baseline in concentrations of peroxidation markers [35]. Conversely, ESR and neopterin concentrations were increased from baseline with melatonin and significantly higher at six months than concentrations in the placebo-treated cohort, which experienced a significant downward trend in these inflammatory indicators during the trial [35]. Paradoxically, neither the elevations in ESR and neopterin concentrations nor the decrease in tissue peroxidation associated with melatonin translated into significant differences from the placebo group in terms of patients' symptoms, or in the concentrations of proinflammatory cytokines (TNF-α, IL-1β, and IL-6) [35]. The study researchers concluded that melatonin does not appear to be beneficial in RA [35].
The findings of Forrest et al. are discussed by Maestroni et al., who argue that because melatonin enhances the production of Th1-type and inflammatory cytokines in RA, upregulates cell-mediated and humoral responses, and also exacerbates CIA in mice, melatonin likely promotes RA disease and is inappropriate for therapeutic use [49]. Maestroni et al. also emphasized the high blood melatonin concentrations (280 pg/mL) observed 12 h after dosing in the RA cohort [49], in relation to the notion that higher blood melatonin concentrations, especially in the early morning, may be responsible for the morning stiffness and joint swelling experienced by patients with arthritis [6,43]. Maestroni et al. conjecture that autoreactive T cells in RA patients synthesize and release melatonin, thereby worsening the disease process [49]. Nevertheless, Korkmaz [50] has defended Forrest et al., pointing out that melatonin has shown strong anti-inflammatory activity in studies using known inflammatory agents, such as zymosan [51], lipopolysaccharide [52,53], and carrageenan [54]. Korkmaz speculated that the high blood melatonin concentrations in the RA cohort may have been a compensatory response to RA inflammation and were comparable to the high levels of melatonin and its metabolites in cerebrospinal fluid in meningitis populations and in pediatric patients with epilepsy [50]. Korkmaz maintained that melatonin is an appropriate adjunctive therapy for RA [50].

Melatonin appears to play an important role in microRNA (miRNA) expression in RA. miRNAs are small, non-coding RNAs that post-transcriptionally mediate protein expression by targeting protein-coding genes implicated in cancer cell proliferation, differentiation, apoptosis, and migration [55]. A recent study found that melatonin appears to inhibit miR-590-3p expression and induce apoptosis in human osteoblasts [55]. In another study, melatonin treatment effectively downregulated TNF-α and IL-1β production in human RA synovial fibroblasts (the MH7A cell line) by suppressing PI3K/AKT, ERK, and NF-κB signaling and upregulating miR-3150a-3p expression [20]. Those investigations confirmed that the MT1 receptor mediates the anti-inflammatory effects of melatonin and that melatonin not only inhibits inflammatory cytokine release in mice with CIA, but also attenuates CIA-induced cartilage degradation and bone erosion [20]. This evidence suggests that melatonin targets miRNAs, which could be explored in clinical trials examining the efficacy of melatonin in the treatment of RA. The following figure (Figure 1) and table (Table 1) illustrate the processes through which melatonin exerts its therapeutic effects.
examining the efficacy of melatonin in the treatment of RA. The following figure ( Figure 1) and table (Table 1) illustrate the processes through which melatonin exerts its therapeutic effects. examining the efficacy of melatonin in the treatment of RA. The following figure ( Figure 1) and table (Table 1) illustrate the processes through which melatonin exerts its therapeutic effects. examining the efficacy of melatonin in the treatment of RA. The following figure ( Figure 1) and table (Table 1) illustrate the processes through which melatonin exerts its therapeutic effects. examining the efficacy of melatonin in the treatment of RA. The following figure ( Figure 1) and table (Table 1) illustrate the processes through which melatonin exerts its therapeutic effects. examining the efficacy of melatonin in the treatment of RA. The following figure ( Figure 1) and table (Table 1) illustrate the processes through which melatonin exerts its therapeutic effects. examining the efficacy of melatonin in the treatment of RA. The following figure ( Figure 1) and table (Table 1) illustrate the processes through which melatonin exerts its therapeutic effects. examining the efficacy of melatonin in the treatment of RA. The following figure ( Figure 1) and table (Table 1) illustrate the processes through which melatonin exerts its therapeutic effects. examining the efficacy of melatonin in the treatment of RA. The following figure ( Figure 1) and table (Table 1) illustrate the processes through which melatonin exerts its therapeutic effects. examining the efficacy of melatonin in the treatment of RA. The following figure ( Figure 1) and table (Table 1) illustrate the processes through which melatonin exerts its therapeutic effects. examining the efficacy of melatonin in the treatment of RA. The following figure ( Figure 1) and table (Table 1) illustrate the processes through which melatonin exerts its therapeutic effects. examining the efficacy of melatonin in the treatment of RA. The following figure ( Figure 1) and table (Table 1) illustrate the processes through which melatonin exerts its therapeutic effects. examining the efficacy of melatonin in the treatment of RA. The following figure ( Figure 1) and table (Table 1) illustrate the processes through which melatonin exerts its therapeutic effects. examining the efficacy of melatonin in the treatment of RA. The following figure ( Figure 1) and table (Table 1) illustrate the processes through which melatonin exerts its therapeutic effects. examining the efficacy of melatonin in the treatment of RA. The following figure ( Figure 1) and table (Table 1) illustrate the processes through which melatonin exerts its therapeutic effects. examining the efficacy of melatonin in the treatment of RA. The following figure ( Figure 1) and table (Table 1) illustrate the processes through which melatonin exerts its therapeutic effects. examining the efficacy of melatonin in the treatment of RA. The following figure ( Figure 1) and table (Table 1) illustrate the processes through which melatonin exerts its therapeutic effects. examining the efficacy of melatonin in the treatment of RA. The following figure ( Figure 1) and table (Table 1) illustrate the processes through which melatonin exerts its therapeutic effects. Summary Various research suggests that melatonin has disease-promoting effects in RA and that it could increase the severity of RA, in contradiction to the beneficial effects of melatonin in other autoimmune inflammatory diseases. 
However, in the past decade, some studies have demonstrated that melatonin can alleviate RA by inhibiting RA synovial fibroblast proliferation, TNF-α and IL-1β expression, and MMP-9 activity. The anti-inflammatory character of melatonin in RA is associated with the regulation of microRNAs (such as miR-3150a-3p). Further investigations are therefore warranted to explore the possible double-edged effects of melatonin in RA, and its use in patients with RA requires careful consideration by clinicians. Conflicts of Interest: The authors declare no conflict of interest.
2020-04-23T09:14:40.074Z
2020-04-01T00:00:00.000
{ "year": 2020, "sha1": "daadf6d141dbac6be9ad6dfc0f4264cdb6e7f7d9", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1422-0067/21/8/2877/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "27121bc6b6d5a24d4e5f9ff1a768181baf08c526", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
24638286
pes2o/s2orc
v3-fos-license
MR imaging features of benign retroperitoneal paragangliomas and schwannomas Background To determine whether MRI feature analysis can differentiate benign retroperitoneal extra-adrenal paragangliomas and schwannomas. Methods The MRI features of 50 patients with confirmed benign retroperitoneal extra-adrenal paragangliomas and schwannomas were retrospectively reviewed by two radiologists blinded to the histopathologic diagnosis. These features were compared between two types of tumours by use of the Mann-Whitney test and binary logistic regression. The patients’ clinical characteristics were reviewed. Results Analysis of MRI images from 50 patients revealed no significant differences in the quantitative MRI features of lesion size, ratio of diameter and apparent diffusion coefficient. There were significant differences in the qualitative MRI features of location, necrosis, cysts and degree of tumour enhancement for two readers, with no significant differences in the other qualitative MRI features between these tumours. The combination of necrosis with degree of tumour enhancement during the arterial phase increased the probability that a retroperitoneal mass would represent retroperitoneal extra-adrenal paraganglioma as opposed to schwannoma. Conclusion We have presented the largest series of MRI features of both benign retroperitoneal extra-adrenal paragangliomas and schwannomas. Some MRI features assist in the differentiation between these tumours, with imaging features consisting of necrosis and avid enhancement during the arterial phase, suggestive of retroperitoneal extra-adrenal paragangliomas. Background Primary neurogenic tumours, which constitute 10% to 20% of primary retroperitoneal tumours, occur in a younger age group and are usually benign [1]. These tumours can be classified as ganglion cell origin, paraganglionic system origin (pheochromocytomas and paragangliomas), or nerve sheath origin (neurilemmomas, neurofibromas, neurofibromatosis, and malignant nerve sheath tumours). Schwannomas and extra-adrenal pheochromocytomas are the most common benign soft-tissue tumours occurring in the retroperitoneum [2,3]; extra-adrenal pheochromocytomas account for 1-3%, and schwannomas account for 4% of retroperitoneal tumours [4,5]. Retroperitoneal paragangliomas can be divided into functional and non-functional paragangliomas, and functional tumours are often associated with hypertension, tachycardia, headache and diaphoresis [6]. However, non-functional paragangliomas can be completely clinically silent (they are referred to as "incidentalomas"). Since retroperitoneal benign schwannomas are usually asymptomatic, diagnostic difficulties are often encountered for differentiating retroperitoneal paragangliomas from schwannomas due to their nonspecific clinical and imaging features. However, any physical contact with these silent paragangliomas can precipitate cardiac arrhythmias and malignant hypertension [7]. Now, more than ever, urologists and radiologists should understand the imaging appearances of paragangliomas, as differential diagnosis between them is clinically essential in decision making on a therapeutic strategy. 
To the best of our knowledge, description of the MRI features of retroperitoneal extra-adrenal paragangliomas and schwannomas has been reported on small sample sizes in the literature, and no joint assessment has been performed for the imaging appearances of these two similar entities, which may be expected to have overlapping imaging findings in view of their common pathologic characteristics. Therefore, the purpose of our study was to retrospectively analyse the MR imaging differences between benign retroperitoneal extra-adrenal paragangliomas and schwannomas, particularly for those with non-functional paragangliomas. Patients This retrospective study was approved by our institutional review board with waiver of informed consent due to the retrospective nature. Between July 2008 and February 2016, retroperitoneal extra-adrenal paragangliomas and schwannomas were identified within the radiology databases and were confirmed by surgical resection and pathological findings in 50 patients who had undergone preoperative MRI. Of these 50 patients, 20 had undergone preoperative CT (13 paraganglioma and 7 schwannoma patients). All patients' medical histories were reviewed. In all, 50 patients (30 men and 20 women, mean age: 44.3 ± 12.1 years, age range: 17-79 years old) had a total of 50 tumours: 24 benign retroperitoneal extra-adrenal paragangliomas and 26 benign retroperitoneal schwannomas. Magnetic resonance imaging protocol MRI examinations were performed with a 1.5-T system (n = 5, Signa HDXT, GE Healthcare), a 3.0-T system (n = 28, Signa EXCITE; GE Healthcare, Milwaukee, WI, USA) and a 3.0-T system (n = 17, Discovery 750, GE Healthcare, Milwaukee, WI, USA). A surface phased-array coil was used, with all patients in the supine position. Respiratory-triggered transverse and coronal T2-weighted fast spin-echo sequences were initially performed, followed by transverse T1-weighted dual-echo in-phase and out-of-phase sequences and three-dimensional fat-saturated T1-weighted dynamic contrast-enhanced sequences performed during suspended respiration. Transverse breath-hold diffusion-weighted imaging (DWI) was obtained using a single-shot, spin-echo echo-planar sequence prior to the administration of contrast material, with tri-directional gradients and two b values: 0 and 800 s/mm². A 15-mL bolus of contrast medium (gadobenate dimeglumine, MultiHance; Bracco Sine, Shanghai, China) was injected intravenously at a flow rate of 2 mL/s using a power injector (Spectris; MedRad, Warrendale, PA, USA), followed by a 20-mL saline flush. Dynamic contrast-enhanced MRI (DCE-MRI) was performed in the transverse plane at baseline (precontrast) and during the arterial, venous and delayed phases. Imaging features analysis Two radiologists with 10 and 5 years of experience in the interpretation of abdominal MR images independently reviewed all images. The readers, who were blinded to the histologic diagnoses of the lesions, evaluated each lesion and recorded the presence of each of the following features [8,9]. Quantitative MR image feature analysis (A) Tumour maximum size: the maximum size of each lesion was measured at its single largest diameter in three planes. (B) Ratio of diameter: two readers independently measured the maximum transverse diameter (TD) and longitudinal diameter (LD) of the mass in the coronal section. The means of maximum TD and LD of the tumour in the coronal section were recorded.
Each measurement was conducted three times, with the mean value used as the final value to avoid intra-and inter-observer disagreements. Ratios of diameter were determined by the following equations: Ratio of diameter = Mean TD/ Mean LD. (C) Apparent diffusion coefficient (ADC) values: ADC maps were auto-generated. The region of interest (ROI) was placed corresponding to the most obvious enhancing region of the retroperitoneal tumours in the arterial phase according to visual assessment, with the aim of avoiding necrosis, haemorrhage and cystic changes (which are defined as those parts of the lesions showing no enhancement on DCE-MR images) and was performed using an Advantage Workstation (Advantage Workstation, version 4.6, GE Healthcare, Bue, France). In each patient and for each tumour, three ROIs, each measuring 30-60 mm 2 , were drawn on three target anatomical structures. The mean ADC values in ROIs on three targets were calculated for each patient. Qualitative MR image feature analysis (1) Position and peripheral location: The lesion was assessed according to whether it was located in the right paravertebral region or the left paravertebral region (near the spinal column and psoas muscle, adrenal region or kidney); anterior to the vertebrae (close to the abdominal aorta and the inferior vena cava, the origin of the inferior mesenteric artery or peripancreatic location); or in the pelvic cavity. (2) Shape: Tumours were round or oval/irregular. (3) Margin: Tumours were well-defined or partly poorly defined. (4) Microscopic lipid: There was an area of signal loss in the lesion on out-ofphase T1-weighted images. (5) Subacute haemorrhage: There were areas of increased T1 signal intensity on unenhanced fat suppressed T1-weighted images. (6) Cystic degeneration: Represented by areas with signal intensities (SIs) equal to that of cerebrospinal fluid on T2-weighted images; low SI on T1-weighted images; lack of enhancement; and lobulated morphology. (7) Necrosis: Represented by high SI on T2-weighted images, although not as high as the signal of cerebrospinal fluid; low SI on T1-weighted images; lack of enhancement; and central location within the tumour. (8) Degree of tumour enhancement: Subjective assessments of the MR imaging degree of mass enhancement compared with that of the renal cortex (avid enhancement, moderate enhancement or slight enhancement) were performed on Gd-enhanced MR images acquired during the arterial phase. (9) Other MR imaging features: Calcification, smooth expansion of a sacral nerve root exit foramen, fluid-fluid level, and subjective assessment of the MR imaging degree and pattern of mass enhancement during the venous and delayed phases. Pathologic diagnosis All specimens were retrospectively examined by two uropathologists who were unaware of the MRI findings (10 years of experience in uropathology) in consensus. Statistical analysis Continuous variables are expressed as the means ± SD and were analysed by independent t-tests for normally distributed data or Mann-Whitney tests for non-normally distributed data. For the qualitative variables, the Chi square test was used to compare the sample proportions of the two groups. Generalized estimating equations based on a binary logistic regression model were used to determine whether lesion type (retroperitoneal extra-adrenal paragangliomas or schwannomas) was associated with any of the individual binary factors. 
In this context, stepwise variable selection was performed to determine whether the combination of two or more of the aforementioned imaging features represented a significant independent predictor of schwannoma or paraganglioma. These tests were performed separately for each reader. Kappa coefficients were not used for this determination because the very high prevalence rates of certain imaging features for many of the binary factors was expected to produce misleadingly low values [8,10]. All reported p values are two-sided and considered statistically significant when less than 0.05. SPSS version 19.0 software (IBM Corporation, Armonk, NY, USA) was used for all computations. Demographic data and clinical characteristics In this study, a total of 50 lesions in 50 patients were identified for inclusion in the analysis, consisting of 24 retroperitoneal extra-adrenal paragangliomas (Fig. 1) and 26 retroperitoneal schwannomas (Fig. 2). All presenting clinical characteristics of these patients are summarized in Table 1. A total of 30 tumours were incidentally found in 50 patients. The 24-h urinary vanilmandelic acid and urinary catecholamine concentrations were measured for 22 of the patients, of which 9 were positive. No patient had a medical history of neurofibromatosis. The tumours were fully excised in all cases, with clear resection margins. The final histologic diagnosis was obtained by laparoscopic surgery (15 lesions), robot-assisted laparoscopy (11 lesions), and laparotomy (24 lesions). Findings of the quantitative MR imaging features The presenting quantitative MR imaging characteristics of mean maximum lesion size, ratio of diameter and ADC values of the 24 retroperitoneal extra-adrenal paragangliomas and 26 schwannomas are summarized in Table 2, with no significant differences in the assessments by two reviewers. Of the 50 lesions, the maximum diameter was greater than 5 cm in 34 cases (17 retroperitoneal extraadrenal paragangliomas and 17 schwannomas), while in the remaining 16 cases, the maximum diameter was less than 5 cm (7 paragangliomas and 9 schwannomas). Findings of the qualitative MR imaging features (1) There were statistically significant differences in terms of lesion location, necrosis, cystic degeneration and degree of tumour enhancement for both readers (p = 0.000-0.011, 0.000-0.019 for readers 1 and 2, respectively), while there were no statistically significant differences between retroperitoneal extra-adrenal paragangliomas and schwannomas in terms of shape, boundaries, microscopic fat, and subacute haemorrhage findings for both readers (p = 0.164-0.589, 0.271-1.0 for readers 1 and 2, respectively) ( Table 3). (2) Concordance of the two readers for each of the assessed binary features was good to excellent, ranging from 52% to 98% for all features (Table 4). (3) Only two features remained statistically significant in the stepwise multivariate logistic regression model: necrosis and degree of tumour enhancement. The combination of necrosis with degree of tumour enhancement during the arterial phase increased the probability that a retroperitoneal mass would represent extra-adrenal paraganglioma versus schwannoma, with diagnostic accuracies (c statistic or area under the curve-AUC) of 0.893 (reader 1) and 0.853 (reader 2)and with 95% confidence intervals (CIs) of 0.807-0.978 for reader 1 and 0.748-0.9579 for reader 2 (Table 5). 
(4) No retroperitoneal extra-adrenal paragangliomas were identified in the pelvis, though 4 schwannomas were identified in the pelvis, with 1 case located in the medial iliac arterial bifurcation, 2 cases in the wall of the basin, and another in the anterior sacral space. Approximately 54.17% of the paragangliomas were located in the prevertebral region, which is close to the aorta and the inferior vena cava. However, 50% of schwannomas were in the right paravertebral region (2 cases in the renal portal area, 10 cases in lumbar major muscles, and 1 case in the adrenal region).
Fig. 1 A histologically proven benign retroperitoneal extra-adrenal paraganglioma: high signal intensity relative to the gluteal muscles on axial T2-weighted imaging, with even higher signal in the intratumoural cystic areas; circular, nodular high signal on axial DWI; signal slightly lower to slightly higher than the gluteal muscle on in-phase, out-of-phase and pre-contrast T1-weighted imaging; obvious, inhomogeneous enhancement during the arterial and delayed phases; mean ADC of the tumour ROI 0.00119 mm²/s.
Fig. 2 An asymptomatic patient (age: 30-39) with a histologically proven benign retroperitoneal schwannoma (transverse and longitudinal diameters 2.19 cm and 3.51 cm, respectively): heterogeneous signal relative to the gluteal muscles on axial T2-weighted imaging; circular high signal on axial DWI; slightly low to isointense signal on in-phase and out-of-phase T1-weighted imaging, with intratumoural microscopic fat slightly lower in signal on out-of-phase imaging; hyperintense spots from subacute haemorrhage on pre-contrast T1-weighted imaging; slight, inhomogeneous enhancement during the arterial phase and moderate, inhomogeneous enhancement during the delayed phase; mean ADC of the tumour ROI 0.00171 mm²/s.
Findings of other MR imaging features First, only small sub-centimetre flecks of calcification were seen in 2 schwannomas and 1 paraganglioma. Second, only two schwannomas were shown to demonstrate smooth expansion of a sacral nerve root exit foramen without bony destruction of the sacrum. Third, all tumours showed inhomogeneous enhancement following gadolinium administration, with a non-enhancing component showing a fluid signal or necrotic component and peripheral enhancement of the solid elements. Only one paraganglioma had a fluid-fluid level inside the lesion. The degrees of enhancement of 21 paragangliomas and 26 schwannomas showed continuous signal increases in the venous and delayed phases (the persistent pattern), and only 3 paragangliomas showed "washout" of signal intensity.
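As a simple computational illustration of the two-feature model reported in finding (3) above, the following sketch fits a binary logistic regression with necrosis and avid arterial-phase enhancement as predictors of paraganglioma versus schwannoma and reports the c statistic (AUC). It uses ordinary logistic regression rather than the generalized-estimating-equation formulation applied in this study, and the feature and label arrays are illustrative placeholders, not the study data.

```python
# Minimal sketch: two binary imaging features predicting paraganglioma (1) vs schwannoma (0).
# The data below are placeholders for illustration only, not the patient data of this study.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Each row: [necrosis (0/1), avid arterial-phase enhancement (0/1)]
X = np.array([[1, 1], [1, 1], [1, 0], [0, 1], [1, 1], [1, 1],
              [0, 0], [1, 0], [0, 1], [0, 0], [0, 0], [0, 0]])
y = np.array([1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0])

model = LogisticRegression().fit(X, y)
prob_pgl = model.predict_proba(X)[:, 1]  # predicted probability of paraganglioma
print("c statistic (AUC):", round(roc_auc_score(y, prob_pgl), 3))
print("odds ratios (necrosis, enhancement):", np.exp(model.coef_).round(2))
```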
Discussion Retroperitoneal extra-adrenal paragangliomas and schwannomas confined to the retroperitoneum are frequently encountered in clinical practice, and their differentiation often presents diagnostic difficulties [11]. To the best of our knowledge, this study represents the largest series to date to describe the MRI features of both tumours, with the aim of differentiating them despite their overlapping clinical, MR imaging and histologic features. Paragangliomas are usually described on MRI as masses having characteristic high signal intensity or a light-bulb-bright signal on T2WI with the use of fat suppression [12], which is used to differentiate them from other tumours, but further studies have proposed that this feature is neither specific nor sensitive and have indicated that the use of this sign leads to the misdiagnosis of paragangliomas in up to 35% of cases [13,14]. The MRI characteristics of benign retroperitoneal schwannomas include hypointensity on T1WI and hyperintensity on T2WI [15,16], and neither is specific. However, in our results, the MRI features of tumour location, necrosis and tumour enhancement showed significant differences between these two retroperitoneal tumours. More than 50% of paragangliomas were situated in the prevertebral region close to the inferior vena cava and aorta, following the aorto-sympathetic chain, results that were consistent with other reports [17]. However, schwannomas were usually located in the paravertebral region and, less commonly, adjacent to the kidney, pre-sacral space, and abdominal wall [6]. Although the appearances overlapped with the reported appearances of many retroperitoneal tumours [18], no statistically significant differences were found in the stepwise multivariate logistic regression model in our study. One obvious specific feature of 3 schwannomas was smooth expansion of a nerve root exit foramen without bony destruction, which is highly suggestive of retroperitoneal schwannomas [19]. DCE-MRI has also been employed for tumour detection and characterization. In this study, 80% of retroperitoneal extra-adrenal paragangliomas exhibited strong initial signal increases during the arterial phase, whereas 76.92% of schwannomas demonstrated slow initial signal enhancement. These results are consistent with other studies [20,21]. It should be noted that a pattern of continuous signal increase of masses was common in these tumours in the venous and delayed phases and was not helpful for distinguishing paragangliomas from schwannomas. Necrotic change was noted in more than 70% of retroperitoneal extra-adrenal paragangliomas but in only 34.62% of schwannomas in our study. Necrotic changes tend to occur as paragangliomas increase in size [18]. Further analysis showed that a combination of avid enhancement with necrosis provided diagnostic accuracies of 0.853 and 0.893 for the diagnosis of retroperitoneal extra-adrenal paragangliomas in our series.
In other words, these findings allowed the differentiation of paragangliomas from schwannomas: avid enhancement and necrosis were predictive of paragangliomas, while slight enhancement was correlated with schwannomas. Paragangliomas are characteristically highly vascular neoplasms with an abundant capillary network, and may have a precarious microcirculation because of high levels of tissue vasoconstrictor substances. These histologic features can cause spontaneous massive intratumoural haemorrhage and necrotic degeneration, resulting in the formation of a pseudocyst, and such tumours exhibit marked and early enhancement on DCE-MRI [22,23]. Further, Sahdev et al. reported that necrotic changes were observed in more than 70% of retroperitoneal extra-adrenal paragangliomas and tended to occur as paragangliomas increased in size [18]. Several theories attempt to explain cystic degeneration in schwannomas. One involves degeneration of Antoni B areas leading to cyst formation as the tumour enlarges; another holds that, with increasing tumour size, central ischaemic necrosis occurs and causes cysts within the tumour [24,25]. On histopathological examination, the Antoni B area is a myxoid component. No significant differences were shown between the mean lesion sizes and mean ADC values of these two types of tumours in our study, suggesting that quantitative ADC assessment does not provide significant value for their differential diagnosis. We found that the mean ADC values of these tumours were greater than the mean ADC values of neck paragangliomas and schwannomas [26,27]. Evidence of degeneration, which includes cysts, subacute haemorrhage and microscopic fat, was common for both retroperitoneal extra-adrenal paragangliomas and schwannomas [28]. Takatera et al. [19] reported that 66% of retroperitoneal schwannomas showed cystic degeneration, and schwannomas are recognised to carry an exceptional risk of degeneration [11,29]. In this study, more than 90% of retroperitoneal schwannomas showed this feature, which was found in only 62.5% of retroperitoneal extra-adrenal paragangliomas, highlighting the fact that this feature is helpful in the differential diagnosis. Literature studies have reported that haemorrhagic portions can be seen in paragangliomas and schwannomas [30,16]; however, they did not report that this feature was able to distinguish these tumours. Subacute haemorrhage was found in 50% of retroperitoneal extra-adrenal paragangliomas and 34.62% of schwannomas, and only one paraganglioma showed a fluid-fluid sign. In addition, microscopic fat has not been reported in the literature and showed no obvious specificity. Calcifications can occur in all types of neurogenic tumours. In our study, calcifications were seen in only 3 lesions, without showing any obvious specificity. However, some authors have reported that punctate calcifications can be seen in retroperitoneal extra-adrenal paragangliomas, along with punctate or curvilinear calcifications along the walls of masses in schwannomas [31]. There are several limitations in our study. First, this was a retrospective study with a relatively small sample size for benign retroperitoneal extra-adrenal paragangliomas and schwannomas, reflecting the low incidence rates of these tumours. The imaging features of our patients were similar to those described in other radiological series, and the small number of cases reflects the rarity of the tumours.
Second, due to the study's retrospective nature, two different field strengths of magnetic resonance scanners were used. Although we demonstrated that field strength had no effect on ADC measurements of renal tumours between 1.5 T and 3.0 T, we did not include many retroperitoneal tumours. In addition, ADCs of various kinds of retroperitoneal lesions should be compared between 1.5 T and 3.0 T. Third, lesions presented with predominantly cystic changes, haemorrhage and necrosis, which may affect ADC values or signal intensity measurements. Finally, there was only one malignant schwannoma and no malignant paragangliomas. We excluded these tumours from the analysis and did not perform any further studies of benign and malignant retroperitoneal tumours. Conclusions In summary, in this study, we present the largest series of radiological studies of benign retroperitoneal extraadrenal paragangliomas and schwannomas. These tumours are often found incidentally or present with vague and nonspecific symptoms. They are rare retroperitoneal neoplasms, usually presenting as large ovoid or spherical masses with smooth, well-defined borders and do not invade or obstruct adjacent structures. The combination of avid enhancement with necrosis, clinical CA-related symptoms, positive VMA/24 h, positive CA/24 h and positive 131I-MIBG provided diagnostic accuracy for the diagnosis of retroperitoneal extra-adrenal paragangliomas in our series. When these features are correctly recognized, there should be a high level of suspicion for paragangliomas.
2018-01-10T06:45:28.070Z
2018-01-04T00:00:00.000
{ "year": 2018, "sha1": "1191c67e55f986060642777fb3c4478eeb02817f", "oa_license": "CCBY", "oa_url": "https://bmcneurol.biomedcentral.com/track/pdf/10.1186/s12883-017-0998-8", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1191c67e55f986060642777fb3c4478eeb02817f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
220827585
pes2o/s2orc
v3-fos-license
Modelling performance an air transport network operated by subsonic and supersonic aircraft ABSTRACT This paper deals with modelling the performance of an air transport network operated by existing subsonic and the prospective supersonic commercial aircraft. Analytical models of indicators of the infrastructural, technical/technological, operational, economic, environmental, and social performance of the network relevant for the main actors/stakeholders involved are developed. The models are applied to the given long-haul air route network exclusively operated by subsonic and supersonic aircraft according to the specified “what-if” scenarios. The results from application of the models indicate that supersonic flights powered by LH2 (Liquid Hydrogen) could be more feasible than their subsonic counterparts powered by Jet A fuel, in terms of about three times higher technical productivity, 46% smaller size of the required fleet given the frequency of a single flight per day, 20% lower sum of the aircraft/airline operational, air passenger time, and considered external costs, up to two times higher overall social-economic feasibility, and 94% greater savings in contribution to global warming and climate change. These flights could be less feasible in terms of about 70-85% higher aircraft/airline operational costs, 70% and 19% higher fuel consumption and emissions of Green House Gases, respectively, and 6-13% higher noise compared to the specified acceptable levels. initial aircraft mass at the beginning of cruising phase (kg) m de initial aircraft mass during descending phase (kg) N Newton nm nautical mile p passenger P aircraft price including capital maintenance costs ($US/aircraft) q dynamic pressure (N/m 2 ) R route length (nm, km) RV aircraft residual value (%) r GDP average GDP generated by commercial air transportation ($US/p-km) r fc cumulative rate of improvements in aircraft fuel efficiency r distance where thrust force (T) is applied (m, km, nm); R/C rate of climb of aircraft (ft/min) R/D rate of descent of aircraft (ft/min) R 1 , R 2 length of route in direction (1) and (2) INTRODUCTION Increasing of travel speed has been a human endeavour for a long time. In general, limitations on the time and monetary budget, maximizing travel distances during the shortest possible time and related costs have become the main driving forces in developing both inland HS (high speed) and air transport systems. A relatively simple calculation indicates that an increase in the operating speed generally brings marginal savings in the user/passenger travel time. These savings increase with increasing of the non-stop travel distance(s) (1) . Under such conditions, the possible implementation of commercial supersonic aircraft seems to be beneficial primarily in the case of long-haul flights. Currently, these flights are carried out by commercial subsonic aircraft. Additionally, combined with the aircraft seat capacity and the flight frequency, the supersonic speed could substantialyl increase the air route(s) technical productivity and thus bring obvious gains to the airlines. However, these gains in travel time and technical productivity remain questionable after being counterbalanced by the overall economics of these flights, including their operational costs and the environmental and social externalities. The earliest but retired Concorde and TY 144 supersonic aircraft did not achieve such an acceptable social-economic balance. 
Therefore, the question is whether the design and operational concepts of forthcoming supersonic aircraft, combined with an innovative consideration, could possibly indicate elements of their positive social-economic feasibility? This paper provides a framework for assessing this balance through modelling performance indicators of the given long-haul air route network, operated exclusively by supersonic aircraft or their subsonic counterparts, according to the given "what-if" scenarios. Modelling implies development of the analytical models of particular indicators. The considered performances are infrastructural, technical/technological, operational, economic, environmental, and social. Infrastructural performance relates to airports as network nodes and the air routes connecting them as the network links. The airports and air routes can accommodate both subsonic and supersonic aircraft safely, efficiently, and effectively. Technical/technological performance is considered as directly related to the design of both considered categories of aircraft. These are: length (m), wing aspect ratio, L/D (lift-to-drag) ratio, take-off weight (tons), number of engines (-), take-off speed (kt), cruising speed (Mach), landing speed (kt), range (nm; km), payload (seats), payload/weight ratio (-), fuel/weight ratio (-), and payload/fuel ratio (-) (13) . In the further consideration, these are exclusively used as the given case-based parameters. Operational performance relates to the passenger demand and the supply of transport capacities serving this demand, under the given condition. In general, these are the flight frequency by the given aircraft type(s) and load factor, the air route travel time, the aircraft turnaround time, the route and network transport work and technical productivity, and the fleet size (13) . Economic performance is considered to be the flight(s) operating cost (as the basis for setting up airfares), the cost of passenger time while on board the flight(s), contribution to the regional/national gross domestic product (GDP), and the overall social-economic feasibility. Environmental performance includes the aircraft/flight fuel consumption and related emissions of greenhouse gases (GHG), their costs, i.e. externalities, and the contribution of GHG emissions to global warming and climate change. Social performance relates to the aircraft noise around airports and along the network routes (the latter by supersonic aircraft/flights), congestion/delays, safety, i.e. the risk of incidents/accidents, and their corresponding costs, i.e. externalities. These performances are considered relevant for the particular actors/stakeholders involved. These can be the aerospace manufacturers, airlines, airports, users/air passengers, local communities, and aviation and non-aviation regulatory bodies and policy makers at the local, regional, national, or international level. In addition to this introductory section, the paper consists of five other sections. Section 2 presents an overview of research and development of commercial subsonic and supersonic aircraft. Section 3 deals with development of the analytical models of performance indicators of the given long-haul air route network. This consists of airports as network nodes and air routes with the non-stop flights connecting them as the network links. Section 4 shows an application of the models of performance indicators to an existing long-haul air route network. 
This network consists of 25 longest air routes where the current subsonic flights are assumed to be completely replaced by their supersonic counterparts in the year 2050. The last Section (5) summarises some conclusions. Commercial subsonic aircraft The commercial subsonic aircraft considered in the given context have been typically characterised by their range (R), seat capacity (S), and cruising speed (v As can be seen, technical productivity (TP), (operational) cost (C), and fuel consumption (FC) have been strongly driven by the aircraft capacity (S), cruising speed (v), and range (R). Aircraft capacity (S) has been the strongest driving force in all three relationships. In addition to fuel consumption (FC), it has also indirectly been the strongest driver of the corresponding GHG emissions. Noise generated by commercial aircraft around airports, as one of the conditions for their certification, has been regulated at the local, national, and international level. Currently, all commercial aircraft meet the specified noise limits (5) . As far as safety is concerned, the accident rate of commercial aircraft has generally been decreasing over time; for example, from 0.55/10 6 flights in 1998 to 0.03/10 6 flights in 2017. Specifically, this rate with fatalities on the long-haul flights, carried out by the B777s and A330s aircraft between 1959 and 2016 was 0.20/10 6 flights and 0.21/10 6 flights, respectively. The most recent B787s, A350, and A380 have been without accidents with fatalities (6),(7) . Past developments Past development of commercial supersonic 1 aircraft had materialised in the commercialisation of the two aircraft -the French/British Aerospatiale/BAC Concorde and the Soviet Union's TY 144. The Concorde entered commercial service in 1976 and retired in 2003. The TY 144 entered the commercial service in 1977 and retired in 1978 (8)-(11) . Both aircraft had a similar design, technical productivity, operating costs, fuel consumption, related GHG emissions, and noise. However, regarding the latest four above-mentioned features, they had not fulfilled expectations of the main actors/stakeholders. The airfares had been based on the high operating costs, mainly influenced by the high fuel consumption. This had been additionally compromised by constraining the overland operations at the supersonic speed (M > 1) aimed at mitigating and/or avoiding excessive noise from the sonic boom(s). Under such circumstances, these aircraft had been inferior to their slower subsonic counterparts, the B 707 and the B 747, regarding the operational costs and consequently airfares (12)- (14) . Additionally, the accidents (crashes) of the Concorde (25 Jul 2000, Paris) and the TY 144 (3 June 1973, Paris Air Show) raised some concerns about their safety, which consequently speeded up their retirement (15) . Past and current research and development After the retirement of the Concorde and the TY144, research and development of supersonic aircraft continued. One of the earliest efforts dealt with identifying the relevant research topics regarding the design, economic efficiency, and safety of these aircraft. This was initially elaborated by improving the existing and developing some innovative research techniques (27) . This was followed by elaborating the concepts of commercial supersonic transport aircraft in terms of identifying new research opportunities regarding critical technologies and areas needing continuous development. 
These included the airframe design, control systems, engines, and materials, as well as the issue of reducing the sonic boom, fuel consumption and GHG emission, improving in-flight safety, and the certification requirements (28,29) . Further research summarized the developments of the concepts of supersonic aircraft over the past 30 years, from the engineering, economic, and safety/environmental/social perspective (30) . This was certification by research on the possible introduction of LH 2 as fuel for commercial air transportation. It elaborated the necessary conditions for smooth transition from conventional (Jet A) to new (LH 2 ) fuel, including the necessary modifications of the aircraft design. This resulted in the development of the concept of both subsonic and supersonic aircraft powered by LH 2 (20), (31) . In particular, comprehensive research on developing the concept of a large supersonic aircraft, including the overall long-term aspects related to the high-speed transport, was carried out. For example, two EC (European Commission) projects, LAPCAT and ATLLAS, developed the methodology for aircraft design including optimal integration of their airframe, engines, and materials. Additionally, some dedicated experiments were carried out to evaluate the overall feasibility of the proposed design under different operating conditions (24), (25) . RAMJET or SCRAMJET engines were also specifically explored as propulsion systems for supersonic aircraft, including the challenges in their development, technical/technological and operational feasibility (32), (33) . As far as the most recent endeavours of the airspace industry are concerned, Boeing has announced the development of a hypersonic aircraft with the cruising speed of about M = 5. It is thought that it will be operational by the late 2030s (34) . Additionally, three U.S.-based startup companies, Aerion, Spike, and Boom, are developing the new supersonic aircraft expected to be operational in the mid-2020s (see Table AI-1) (22), (23), (35) . The operational compatibility of the future supersonic aircraft has also been under scrutiny, with regard to the current and future ATC (air traffic control) system, and the corresponding flight operational rules and procedures. In particular, the prospective benefits and barriers to integrating these aircraft seamlessly in the U.S. NAS (National Airspace System) have been considered (12), (21), (39) . Several research efforts have recently focused on the assessment of the potential market for the supersonic flights. One deals with the development of the air transport market, including its long-haul segment where the supersonic aircraft would most likely operate (36) . The other deals directly with the estimation of the global market potential for supersonic transportation by evaluating worldwide data on the premium ticket sales (37) . One of the main concerns in developing the new concepts of supersonic aircraft has continued to be their economic efficiency. This has expected to be mainly influenced by the costs of substantial fuel consumption. The airfares based on such costs could make them eventually attractive primarily for business passengers with the presumably high value of their time. However, some research has indicated that if these aircraft were large and powered by LH 2 fuel charged at the reasonable prices, their costs and related airfares would be quite competitive to those of their current subsonic counterparts (24) . 
Since the environmental and social impacts of supersonic aircraft powered by Jet-A fuel (kerosene) have expected to be substantial, the corresponding GHG emissions and their noise have been also under scrutiny. Some research has reviewed the environmental issues and challenges of relevance to the design of supersonic business jets. Due to the inherent interrelation of the above-mentioned performance, a multidisciplinary design, analysis and optimisation have been considered as necessary for creating "low-boom" "low-drag" supersonic aircraft (38) . Additionally, a preliminary assessment of noise and GHG emissions by supersonic aircraft has indicated that their most likely design would not enable fulfilment of the current (2018/19) global standards for GHG emission of and local-airport noise. Particularly, according to the operating scenarios on the selected routes, their average fuel consumption and related GNG emissions (per passenger) would exceed those of their current subsonic counterparts several fold (18) . Also, the regulation of operations of the new supersonic aircraft, which is already underway, is primarily related to the sonic boom currently restricting their overland operation. One of these has been the U.S. FAA (Federal Aviation Administration) initiative for creating federal and international policies, regulations, and standards to certify the safe and efficient operation of civil supersonic aircraft (12), (19), (39) . The above-mentioned concepts of supersonic aircraft have been thought to carry out the long-haul flights with a rather positive balance between their effects and impacts. The effects have included travel speed, technical productivity, and economics. The impacts have included fuel consumption, GHG emissions, and noise. Some of these thoughts have also been systematically articulated as far-term (beyond the year 2035) research objectives of the two of the six strategic thrusts of the U.S. NASA strategic research program. The two relevant strategic thrusts have been "Innovation in Commercial Supersonic Aircraft" and "Transition to Alternative Propulsion and Energy" (16) . Consequently, the general requirements for design and operation of these aircraft have been as follows (4),(8),(16)-(23) : • Sufficient range for operating along the current and future long-haul, including the longest-haul non-stop routes; • Flight costs and related air fares comparable to those of their subsonic counterparts and consequently attractive for both airlines and different categories of air passengers (low, medium, high income; business, leisure); • Fuel consumption and related GHG emissions at the level at least neutral compared to their current subsonic counterparts; and • Noise around airports and the sonic boom overland within the existing and forthcoming regulatory limits, the former similar to their subsonic counterparts. The causal (regression) relationship between the technical productivity (TP), cruising speed (v), and capacity (S) of the past and current concepts of supersonic aircraft is estimated as follows (based on data in Table AI-1 (Appendix I)): As can be seen, the aircraft capacity (S) and cruising speed (v) have very strongly influenced on technical productivity (TP). Again, this influence is about three times greater thanks to the seat capacity (coefficient at the variable (S)) than thanks to the maximum speed (coefficient at the variable (v)). 
Additionally, the influence of speed on the technical productivity (TP) is for more than four times greater than in the case of subsonic aircraft (see Eq. 1a). In regard to economics, some estimates indicate that the price of currently developing Boom Aircraft will be about 200· 10 6 $US and that of the previous EC LAPCAT Hydrogen Mach 5 Cruiser A2 640 · 10 6 C =. Based on the advertised prices on particular long-haul routes, the average operating cost of Boom Aircraft could be less than about 1.5 -1.8 c / /snm and that of the EC Hydrogen Mach 5 Cruiser A2 about 6.2 c / /s-nm (s-nm -seat-nautical mile) (18), (22), (24), (25) . The fuel consumption, GHG emissions, and noise of these aircraft are expected to be in line with the above-mentioned requirements. In particular, the noise from the sonic boom is expected to be reduced to the level of about 70-80dB primarily through aircraft design and the increase in their cruising altitude, both of which could eventually allow for unrestricted overland operations (22) . Objectives The above-mentioned overview has indicated the existence of long-standing research efforts to develop the concepts of supersonic aircraft. However, those dealing with the systematic analysing, modelling, and comparing performance with those of the subsonic aircraft have been fragmentary or non-existent. This especially applies to the consideration of different operating scenarios including competition or eventually full replacement in the given (mainly long-haul) air route networks. Therefore, the objectives of this paper are to partially decrease this fragmentation and mainly increase interest in the topic within the academic community. As such, the paper provides a framework for the systematic examination of the performance of a given long-haul air route network, exclusively operated by the prospectively forthcoming supersonic aircraft and their subsonic counterparts, according to the "what-if" scenario. This includes definition of the indicators of the above-mentioned network performance, development of their analytical models, and application of the models to the selected network case. Assumptions The models of performance indicators of the given air route network are based on the following assumptions reflecting the "what-if" operating scenarios: • The air route network has a point-to-point spatial configuration consisting of airports as the O-D (origin-destination) nodes of aircraft/flights and their passengers, the long-haul air routes 2 as the network links as shown in Fig. 1; • The characteristics of both subsonic and supersonic aircraft/flights are considered in modelling the particular indicators; • The supersonic aircraft are fully operational including sufficiently (long) range for flying non-stop on all routes of the network; • The airlines operate either a fleet of subsonic aircraft powered by Jet A or a fleet of supersonic aircraft powered by LH 2 (Liquid Hydrogen) fuel; both fleets are homogenous, i.e. n consist of the same aircraft types; • The profiles of subsonic flights continue to be as at present both around the airports (following the SIDs (standard instrument departure) routes and STARs (standard terminal arrival route(s)), and en-route (4D RNAV trajectories). The profiles of supersonic flights need to be standardised. This implies that the ATC (air traffic control) would assign the aircraft dedicated three-dimensional (airspace) corridors mainly separated from the subsonic traffic. 
These would enable: i) taking-off from the origin airport and proceeding along the dedicated SIDs through the terminal area of the origin airport; ii) leaving the terminal area, climbing up to the cruising altitude while accelerating to the supersonic cruising speed; iii) cruising with the constant supersonic speed on the constant cruising altitude; iv) ending cruising and descending from the cruising altitude, while decelerating to the entry speed of the terminal area of the destination airport; v) entering the terminal area and proceeding along the dedicated STARs, again separated from the subsonic arriving traffic; and vi) entering the final approach path and landing. During take-off and landing, these aircraft are considered as in the heavy (or super heavy) wakevortex category. Ideally these aircraft could share the SIDs and STARs of their subsonic counterparts within the corresponding noise constraints (12), (39) ; a simplified scheme of the vertical profile of both categories of flights is shown in Fig. 2. • A single airline or several airlines and/or their alliances operate in the network. Their market relationships, such as collaboration or competition, are not considered. The number of scheduled flights and their load factors on each network route are the same; this implies equality/uniformity of the O-D passenger demand accommodated by these flights under the given conditions. • The direct fuel consumption and related GHG emissions of both categories of aircraft/flights are considered only. • The GHG from burning particular fuels (Jet A and LH 2 ) are assumed to impact the environment independently, i.e. without interrelating with each other (40) . • The airport airside infrastructure (runways, taxiways, and apron/gate complex) is assumed to be suitable for accommodating supersonic aircraft safely, effectively, and efficiently; in general some modifications of the apron/gate parking stands and provision of LH 2 fuel delivery would be needed. The models of performance indicators The analytical models of performance indicators for the air route network shown in Fig. 1 are developed for its representative (average) route. As such, they can be applied to each network route and estimated in both absolute and relative terms. The corresponding values for the entire network can be obtained by adding up these estimated values for all routes. The models of indicators of infrastructural and technical/technological performance are assumed to be implicitly given and therefore not modelled. Indicators of operational performance The operational performance indicators are passenger demand, flight frequency on a route and network aircraft turnaround time on a route, transport work, technical productivity, and the size of the aircraft fleet. Passenger demand and flight frequency on a route: The passenger demand on a network route (k) in the single direction, served by either category of aircraft during time ( τ), is assumed to be (Q k ( τ)) (37) . The flight frequency by either aircraft category to accommodate this demand can be estimated as follows (13) : k route in the given network (k = 1,2,. . ., K); τ time interval in which the flights by either aircraft category are scheduled on route (k) (h, day, year); λ k , S k average aircraft load factor and capacity, respectively, of a flight on route (k) carried out by either aircraft category (λ k ≤ 1.0) (-; seats/dep); K number of routes in the network. 
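The displayed form of this frequency relation does not survive in the extracted text, but the symbol definitions above imply the usual demand-over-offered-capacity expression. The short Python sketch below illustrates it; the function name and the numerical inputs are assumptions chosen only for illustration.

```python
# Sketch of the route-level frequency implied by the symbol list above:
# flights needed on route k during period tau to carry demand Q_k(tau) with
# aircraft of capacity S_k at an average load factor lambda_k (an assumed
# standard form, since the displayed equation is not reproduced here).
import math

def flight_frequency(Q_k: float, S_k: int, lambda_k: float) -> int:
    """Single-direction flights required on route k during the scheduling period."""
    return math.ceil(Q_k / (lambda_k * S_k))

# Illustrative values (not from the paper): 600 passengers/day on a route,
# served by a 300-seat subsonic aircraft or a 75-seat supersonic aircraft,
# both at a 0.80 load factor.
print(flight_frequency(600, 300, 0.80))  # -> 3 subsonic flights/day
print(flight_frequency(600, 75, 0.80))   # -> 10 supersonic flights/day
```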
Aircraft turnaround time on a route: The turnaround time of an aircraft of either category on route (k) is expressed as follows: (1) and (2), respectively (nm, km); (1) and (2), respectively, on route (k), (h); handling time of an aircraft of either category at the apron/gate complex of the end airports, before operating on route (k) in direction (1) and (2), respectively, (h). The time (t k (R k/1 )) or (t k (R k/2 )) in Eq.3b is approximated as follows: t k/cl/. climbing time of a flight of either category operating on route (k) in the given direction (min); t k/cr/. cruising time of a flight of either category operating on route (k) in the given direction (min); t k/de/. descending time of a flight of either category operating on route (k) in the given direction (min); t k/LTO LTO (Landing and Take-Off) 3 cycle of a flight of either category before and after operating on route (k) (min) (i = 1, 2). The other symbols are analogous to those in the previous equations. The detailed analytical models for estimating the flight time components in Eq. 3c are given in Appendix II. Transport work, technical productivity, and size of the aircraft fleet: The transport work of the flight carried out on route (k) of the network by either category of aircraft during time ( τ) is estimated as follows: where all symbols are analogous to those in the previous equations. The technical productivity of the flight carried out on route (k) of the network by either category of aircraft during time ( τ) is estimated as follows: average flight speed carried out by either aircraft category on route (R k ) in both directions (1) and (2) (km/h or kt). From Eq. 3c, the average speed v k (R k ) in Eq. 3e is expressed as follows: From Eq. 3a and 3b, the required fleet of either aircraft category to serve the network under given conditions is expressed as follows: The other symbols are analogous to those in the previous equations. Indicators of economic performance The indicators of economic performance are flight cost, cost of passenger time, and contribution to GDP (gross domestic product). Flight cost: The total cost of a single flight carried out by either category of aircraft on route (k) of the network can be estimated as follows: where C k/F fixed cost of a flight carried out on route (k) ($US/flight); C k/o is the variable, i.e., operating cost of a flight carried out on route (k) ($US/flight). The fixed cost (C k/F) in Eq. 4a can be estimated as follows (42) : The other symbols are analogous to those in Eq. 3a. The operating cost (C k/o ) in Eq. 4a includes the cost of fuel, crew, maintenance, insurance, fees (airport, ATC), and others (www.PlaneStats.com). For the flights carried out by the subsonic aircraft this cost is usually estimated by the empirical data. For the flights carried out by the supersonic aircraft, this cost can be approximated by using the corresponding available data in combination with an analogy with the cost of their subsonic counterparts (24) . From Eq. 4a, the average cost is expressed as follows: c k average cost of a flight carried out on route (k) of the network ($US/p-km). The other symbols are analogous to those in the previous equations. As mentioned above, in combination with the route length, the average cost ( c k ) in Eq. 4c can be considered as the basis for setting up the airfares. Cost of passenger time: Using supersonic instead of subsonic flights is expected to bring savings in the passenger time and related costs. 
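Before turning to the passenger time savings, the operational indicators just defined can be made concrete with a short Python sketch: turnaround time as the sum of block and handling times in both directions, per-flight technical productivity as offered payload times block speed, and the fleet size needed for a given daily frequency. The exact expressions of Eqs. 3b-3g are not reproduced in the extracted text, so the forms below, and all numerical inputs, are assumptions for illustration only.

```python
# Sketch of the operational indicators (assumed standard forms; illustrative inputs).

def turnaround_time(t_out: float, t_back: float, h_out: float, h_back: float) -> float:
    """Turnaround time on route k (h): block times in both directions plus
    apron/gate handling times at the two end airports."""
    return t_out + h_out + t_back + h_back

def technical_productivity(S_k: int, lam: float, v_k: float) -> float:
    """Technical productivity of one flight (p-km/h): offered payload times block speed."""
    return lam * S_k * v_k

def fleet_size(freq_per_day: float, turnaround_h: float, period_h: float = 24.0) -> float:
    """Aircraft required to sustain freq_per_day departures on the route."""
    return freq_per_day * turnaround_h / period_h

# Illustrative comparison: a 10,000 km route flown at 900 km/h (subsonic, 300 seats)
# or 2,700 km/h (supersonic, 75 seats), load factor 0.80, 1.5 h handling per stop.
for v, seats in ((900.0, 300), (2700.0, 75)):
    t_block = 10000.0 / v
    tau = turnaround_time(t_block, t_block, 1.5, 1.5)
    print(f"v = {v:6.0f} km/h  turnaround = {tau:5.1f} h  "
          f"TP = {technical_productivity(seats, 0.80, v):8.0f} p-km/h  "
          f"fleet for 1 flight/day = {fleet_size(1, tau):.2f}")
```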
These average savings by a flight of either category carried out on route (k) can be estimated as follows: t * k/1 t * k/2 flight time on route (k) by subsonic (i = 1) and supersonic (i = 2) aircraft, respectively, (h) (t * k/1 > t * k/2 ); α k average value of time of a user/passenger travelling on route (k) ($US/h-p). The other symbols are analogous to those in the previous equations. Contribution to GDP: The average contribution of the subsonic or supersonic flight(s) carried out in the network to the GDP is estimated as follows: r GDP average GDP generated by commercial air transportation in the given region ($US/pkm); GDP total GDP generated by commercial air transportation in the given region during the specified period of time ($US/year); V RPK output of commercial air transportation in the given region during the specified period of time (RPK/year) (RPK -revenue passenger kilometre). Overall social-economic feasibility: The overall social-economic feasibility of the subsonic or supersonic flight(s) carried out on route (k) of the network is estimated as the difference between its total cost and contribution to GDP. From Eq. 4 (c, d, e) and 5 d (see below) this equals: c k/e average cost of GHG emissions of a flight carried out on route (k) of the network ($US/p-km). The other symbols are analogous to those in the previous equations. If ( c k ) is positive, the flight on the route (k) is overall social-economically feasible, and vice versa. Indicators of environmental performance The indicators of environmental performance are fuel consumption, emissions of GHG, and contribution to the global warming and climate change. Fuel consumption: The fuel consumption of subsonic flights in the network is estimated by using the available empirical data. That of the supersonic flights is estimated by the analytical models considering the mechanical forces acting on the aircraft during the particular phases of flight-climb, cruise, and descend-and the corresponding energy consumption. The summed quantity is then increased for the factor including the fuel consumed during the LTO cycle(s). In general, in each of the above-mentioned flight phases the fuel consumption is estimated as follows: After expanding Eq. 5a in Appendix III for the particular flight phases -climbing (cl), cruising (cr), descending (de), and (LTO) cycle, the total fuel consumption of a flight carried out by supersonic aircraft on route (k) of the network can be estimated as follows: GHG emissions: GHG emissions by subsonic aircraft can be estimated by using available empirical data. Based on Eq. 5b, the GHG emissions by both subsonic and supersonic flight are estimated as follows: If internalised, the total and average cost of GHG emissions of a flight carried out on route (k) by either category of aircraft can be estimated as follows: c e/m · e m and c k/e = C k/e R k · λ k · S k · · · (5d) c e/m unit charge of the (m)-th GHG ($US/kg, $US/ton). The other symbols are analogous to those in the previous equations. Contribution to global warming and climate change: The GHG emitted by subsonic or supersonic flight powered by either type of fuel (Jet A or LH 2 ) contribute to global warming and climate change. Each GHG has its GWP (global warming potential) estimated for the future long-term period (for example, 100 years ahead) (43), (44) . Based on Eq. 5c, the GWP of any subsonic or supersonic flight carried out on route (k) can be estimated as follows (tons of GHG/flight): GWP m Global Warming Potential of the (m)-th GHG (-). 
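A sketch of the passenger-time-saving, GDP-contribution, and emission-cost relations described above. The per-passenger versus per-p-km normalisation is an assumption, chosen so that the per-p-km saving takes the form α_k(t*_k/1 − t*_k/2)/R_k, which is consistent with the ~4.5 ¢/p-km figure reported later in the results.

```python
def time_saving_per_passenger(t_sub_h, t_sup_h, value_of_time):
    # alpha_k * (t*_k/1 - t*_k/2): saving per carried passenger ($US)
    return value_of_time * (t_sub_h - t_sup_h)

def time_saving_per_pkm(t_sub_h, t_sup_h, value_of_time, route_km):
    # Normalised per passenger-km (assumed form)
    return time_saving_per_passenger(t_sub_h, t_sup_h, value_of_time) / route_km

def gdp_contribution_rate(gdp_usd_per_year, rpk_per_year):
    # r_GDP: average GDP generated per revenue passenger kilometre
    return gdp_usd_per_year / rpk_per_year

def emission_cost_per_pkm(fuel_burn_kg, emission_factors, unit_charges,
                          route_km, load_factor, seats):
    # c_k/e = C_k/e / (R_k * lambda_k * S_k), with C_k/e summed over GHG species (Eq. 5d-style)
    total = fuel_burn_kg * sum(e * c for e, c in zip(emission_factors, unit_charges))
    return total / (route_km * load_factor * seats)
```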
The other symbols are analogous to those in the previous equations. The relative savings in GWP by carrying out the supersonic (i = 2 -LH 2 fuel) instead of the subsonic (i = 1 -Jet A fuel) flight(s) on route (k) of the network are estimated as follows: All symbols are analogous to those in the previous equations. Indicators of social performance The indicators of social performance are noise, congestion and delays, and safety. Noise: The noise produced by the subsonic aircraft around airports has been permanently regulated and used as the criteria for their (noise) certification (5) . The noise by the forthcoming supersonic aircraft, in addition to that around airports, has been and will continue to be the subject to specific regulation along the (overland) segments of air routes due to the sonic boom (12), (39) . This noise by a supersonic flight passing above an observer on the ground can be estimated as follows (45) : At present, the costs of noise as an externality by supersonic aircraft are quite uncertain and are therefore not elaborated in the given context. Congestion, delays and safety: Both categories of flights are carried out under the equivalent operational conditions at all airports of the network. As mentioned above, they are assumed to be "ultimately" free from the substantial congestion and delays. The same applies to their safety, i.e. the risk of and actual occurrence of the air traffic incidents/accidents. Therefore, the corresponding indicators of this performance and related costs/externalities are not elaborated. Inputs The indicators of particular performance of a given air route network are estimated by two categories of data: real-life input data on subsonic flights and hypothetical input data on "what-if" scenario-based supersonic flights. In both cases adjustments are made to reflect operations of the network and flights in the year 2050. This is assumed to be the year that supersonic flights will be launched. The input data is also categorised in regard to the particular performance. Infrastructural performance The indicators of infrastructural performance are represented by the characteristics of the existing air route network shown in Fig. 3 and given in Table 1. The same network is assumed to be operated exclusively either by the above-mentioned subsonic aircraft or their supersonic counterparts in the year 2050. Technical/technological performance The fleet of subsonic aircraft contains the average aircraft type based on Eq. 1 (a, b, c). The simplified layout of considered supersonic aircraft is shown in Fig. 4 (the EC's LAPCAT Hydrogen Mach 5 A2 concept) (24), (25), (32) . Additionally, the design-related characteristics of an average subsonic and supersonic aircraft belonging to the corresponding fleets are given in Table 2. On the one hand, these characteristics can be considered as inputs; on the other, they can represent indicators of technical/technological performance of the network and flights. Operational performance The inputs for estimating the indicators of operational performance of the network are synthesised from the relevant empirical data (subsonic flights) and the hypothetical "what-if" operational scenario-based data (supersonic flights) given in Table 1 Economic performance Flight cost: The inputs for estimating the total operating cost of subsonic flight(s) are derived from Eq. 1b. The inputs for estimating the total operating cost of supersonic flight(s) operated at the speed M = 2.4 are derived as follows. 
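The GWP contribution and the relative saving can be sketched as below, assuming the Eq. 5e-type expression sums the per-species emissions weighted by their 100-year GWP factors; the resulting relative saving reproduces the ~94% figure quoted later from the reported factor-16 difference between the two flight categories.

```python
def flight_gwp_tons(fuel_burn_kg, emission_factors, gwp_factors):
    # CO2-equivalent contribution of one flight, summed over GHG species (assumed Eq. 5e form)
    return fuel_burn_kg * sum(e * g for e, g in zip(emission_factors, gwp_factors)) / 1000.0

def relative_gwp_saving(gwp_subsonic, gwp_supersonic):
    # Share of warming potential avoided by the supersonic (LH2) flight
    return 1.0 - gwp_supersonic / gwp_subsonic

# With the subsonic GWP about 16 times the supersonic one, the saving is 1 - 1/16 ≈ 94%
print(relative_gwp_saving(16.0, 1.0))
```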
The price of a supersonic aircraft, including the capital maintenance cost over its lifecycle, is assumed to be P = 450·10^6 $US, similar to that of the A380 aircraft (46),(47). With a useful life of UL = 20 years and a residual value of RV = 10%, this gives an Annual Depreciation Rate (ADR) of 4.5%/year (42). The inputs for estimating the variable cost component include a fuel cost of 1.00 and 0.85 $US/kg of LH2. (Table 1: Characteristics of the existing long-haul air route network and subsonic non-stop flights in the given example (Fig. 2) (49).) The inputs for both subsonic and supersonic flights are adjusted to the prospective conditions in the year 2050.
Cost of passenger time: The inputs for estimating the prospective savings in the cost of passenger time when using supersonic instead of subsonic flights are represented by an average value of passenger time of α_k = 74 $US/h-p, based on 50% medium- and 50% high-income passengers, each performing 50% business and 50% leisure trips. This value is assumed to remain relevant in the year 2050 (h - hour; p - passenger) (50),(51).
Environmental performance
Fuel consumption: The regression equation in Eq. 1c and the inputs in Table 2 are used for estimating the fuel consumption of subsonic flight(s). The inputs in Tables 2, 3, and 4 are used in the corresponding models (Appendix III) to estimate the fuel consumption of the supersonic flight(s). (Table 3 footnotes: the climb/descend time t(H1, H2), where H1 and H2 are the initial and end altitudes; an angle of attack of 4°; transition at FL 60-65 (10^3 ft) (8),(24),(25),(28).) (Table 4: Characteristics of aircraft fuels, emissions of GHG, costs/externalities, and GWP (Global Warming Potential) in the given example (43),(44); footnotes mark particular contributions as high impact (48),(73),(74).)
GHG emissions, contribution to global warming and climate change, and costs/externalities: The inputs for estimating the GHG emissions of both subsonic and supersonic flights relate to the characteristics of Jet A and LH2 fuel; their contribution to global warming and climate change and the related costs as externalities are given in Table 4. The cost of emissions of each particular GHG is adjusted for the year 2050.
Social performance
The cruising altitude range of H = 36-60·10^3 ft and the speed of M = 2.4 are used as the "what-if" scenario-based inputs for estimating the level of noise produced by supersonic flight(s) as experienced by an observer on the ground.
Analysis of the results
Based on the above-mentioned inputs, the performance indicators are estimated for the average (representative) route of the network on which subsonic or supersonic flights are exclusively carried out. These estimates do not compromise in any way the relevance of the findings and the related conclusions referring to the entire network. If needed, the corresponding inputs for each individual route can be used to estimate its performance indicators and thus obtain the corresponding totals for the entire network.
Infrastructural and technical/technological performance
As mentioned above, the indicators of infrastructural and technical/technological performance are not elaborated separately. They are taken as given in the inputs for estimating the indicators of the other performances.
Operational performance
The inputs in Tables 1, 2, and AI-1 (Appendix I) are used for the estimation of the indicators of operational performance given in Table 5.
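The depreciation input can be reproduced directly from the stated price, useful life, and residual value; a short check of the 4.5%/year figure:

```python
price = 450e6          # $US per supersonic aircraft, incl. lifecycle capital maintenance
useful_life = 20       # years
residual_value = 0.10  # fraction of the price remaining at the end of the useful life

adr = (1 - residual_value) / useful_life      # 0.045 -> 4.5 %/year, as stated
annual_depreciation = adr * price             # 20.25e6 $US/year per aircraft
print(f"ADR = {adr:.1%}, depreciation = {annual_depreciation/1e6:.2f} M$US/year")
```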
As can be seen, the flight time of the supersonic aircraft operating at a speed of M = 2.4 would be almost half that of their subsonic counterparts. Consequently, thanks to the shorter route turnaround time, the required fleet of supersonic aircraft would be smaller by about 46%. The transport work on an average route, and consequently in the network, would be equal for both categories of flights, mainly due to the equal flight frequencies, seat capacity, load factor, and average route length. However, thanks to the higher cruising speed, the technical productivity of supersonic flights would be about 2.9 times greater than that of their subsonic counterparts.
Economic performance
Flight cost: The fixed cost of a supersonic flight, based on the fleet in Table 5 of 24 aircraft carrying out 329 flights/year on each of the 25 routes of the network in Table 1, is estimated as: C_F/2050 = [24·(450·10^6)·0.045]/(329·25) = 59088 $US/flight (24); the crew cost: c_cw/2050 = 2000 $US/h · 8.681 h = 17362 $US/flight (56); and the fuel cost: c_fc/2050 = 93516 kg LH2/flight · 1 $US/kg LH2 = 93516 $US/flight (see also below) (48). If these three cost components are assumed to account for about 70% of the total flight operating cost, the corresponding total cost is: C_T/2050 = (C_F/2050 + c_cw/2050 + c_fc/2050)/0.7 = (59088 + 17362 + 93516)/0.7 = 242809 $US/flight. The average cost of a single flight carried out on the route R = 6532 nm (12097 km), by an aircraft with S = 300 seats and a load factor of λ = 0.70, is: c_2050 = C_T/2050/(R·λ·S) = 242809/(6532·1.852·0.7·300) = 0.096 $US/p-km. If the fuel cost is 0.85 $US/kg LH2 (48), the corresponding average cost of the supersonic flight would be: c_2050 = 155939/(6532·1.852·0.7·300) = 0.061 $US/p-km. Figure 5 shows these average costs per flight. As can be seen, at the frequency of 1 flight/day, the average operating cost of the supersonic flight would, depending on the fuel cost, be between 18% (0.85 $US/kg LH2) and 85% (1 $US/kg LH2) higher than that of the subsonic flight. This example indicates that the supersonic flights would generally be economically inferior to their subsonic counterparts under the given conditions.
Cost of passenger time: The cost of passenger time and the potential savings in this cost from using the supersonic instead of the subsonic flight(s) are given in Table 6. As can be seen, the cost of passenger time would be about 50% lower for the supersonic flights. The corresponding savings would be about 4.5 ¢/p-km.
Environmental performance
The fuel consumption of subsonic flights in the given example is derived from Eq. 1c, while respecting the prospective improvement in aircraft fuel efficiency of r_fc ≈ 0.4 by the year 2050 (57). Consequently, the fuel consumption of a flight carried out by an aircraft with a seat capacity of S = 300 seats along the route R = 12097 km (6532 nm) would be FC_2050(R) = 55.3 tons/flight (Jet A fuel). Under the same conditions, by applying the models in Appendix III to the inputs from Table 4, the average fuel consumption of the supersonic flight is estimated as: FC_2050(R) = 1.02·(FC_cl + FC_cr + FC_de) = 1.02·(9245 + 76089 + 6339) = 1.02·91687 ≈ 93516 kg/flight (LH2). The factor 1.02 is applied to include the fuel consumed during the LTO cycle. The inputs in Table 4 and the above-estimated fuel consumption are used for estimating the GHG emissions and their absolute and relative contribution to global warming and climate change, as shown in Fig. 6 (a, b).
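The worked cost estimate for the supersonic flight at a fuel price of 1 $US/kg LH2 can be reproduced step by step with the figures stated above:

```python
fleet, price, adr = 24, 450e6, 0.045          # aircraft, $US, 1/year
flights_per_route, routes = 329, 25

c_fixed = fleet * price * adr / (flights_per_route * routes)   # ≈ 59,088 $US/flight
c_crew  = 2000 * 8.681                                         # ≈ 17,362 $US/flight
c_fuel  = 93_516 * 1.00                                        # ≈ 93,516 $US/flight (LH2 at 1 $US/kg)

c_total = (c_fixed + c_crew + c_fuel) / 0.70                   # ≈ 242,809 $US/flight
route_km = 6532 * 1.852                                        # ≈ 12,097 km
c_avg = c_total / (route_km * 0.70 * 300)                      # ≈ 0.096 $US/p-km
print(round(c_fixed), round(c_crew), round(c_total), round(c_avg, 3))
```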
Figure 6a shows that the average fuel consumption of the supersonic flight would be about 70% higher than that of the subsonic flight. The corresponding GHG emissions would also be higher, by about 19%. At the same time, the GWP of the subsonic flight would be about 16 times higher than that of its supersonic counterpart (CO2 and H2O dominate for Jet A fuel, and H2O for LH2 fuel). Figure 6b shows that, despite the higher fuel consumption and related GHG emissions, the supersonic flight(s) could contribute substantial savings (about 94%) in the overall GWP, and consequently in global warming and climate change, compared to their subsonic counterparts.
Social performance
The noise generated by the supersonic flight(s) operating on an average route of the network is shown in Fig. 7 (a, b). Figure 7a shows that the noise produced by a supersonic flight passing above an observer on the ground would decrease with increasing cruising altitude. Figure 7b shows the same effect expressed through the increased distance between the overflying aircraft and the observer on the ground. The resulting noise levels, generally between 83 and 88 dBA, do not correspond to a barely audible explosion (the physical phenomenon) and have been associated with undesirable psychological reactions. These levels are about 6-13% above U.S. NASA's suggested tolerable level for the sonic boom, set at about 78 dB (39). Additionally, the size of the area covered by the noise from the Mach cone needs to be taken into account. Both of these are, and will certainly continue to be, used as inputs when considering possible noise constraints on the operation of supersonic aircraft (12),(39),(58).
Some derived indicators of performance
Aircraft design: The maximum take-off weight, the payload (passengers and their baggage only), and the fuel consumption allow the estimation of some design-related derived indicators of the technical/technological performance of both subsonic and supersonic aircraft. These are expressed by ratios such as PL/MTOW (Payload/Maximum Take-Off Weight), FW/MTOW (Fuel Weight/Maximum Take-Off Weight), and PL/FW (Payload/Fuel Weight), as shown in Fig. 8. As can be seen, the particular ratios would be quite different for the subsonic and the considered supersonic aircraft. For example, the ratio PL/MTOW is about 21% for subsonic and 8% for supersonic aircraft (full payload for the supersonic aircraft: 31500 kg; 1 passenger + baggage = 105 kg (59)). The ratio FW/MTOW is about 45% for subsonic and 23% for supersonic aircraft, the latter also influenced by the fuel type. Finally, the ratio PL/FW is about 46% for subsonic and 34% for supersonic aircraft, the latter again influenced by the fuel type.
Economics and environment: The derived indicators of economic and environmental performance of an average route and of the entire network are considered through the relationship between the flight operating cost, the cost of GHG emissions (i.e. externalities), and the savings in the cost of passenger time, as shown in Fig. 9 (a, b). Figure 9 shows the difference between the average total cost, and its components, of the subsonic and supersonic flight on an average route of the network. As can be seen, the average total cost would be lower for the supersonic than for the subsonic flight, by about 20%.
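The design ratios reported for the supersonic aircraft can be checked against the stated payload (31500 kg) and the estimated trip fuel (≈93516 kg LH2); the MTOW of 400 t used below is an assumed value, not given in the text, chosen only because it is consistent with the ~8% PL/MTOW ratio reported.

```python
payload = 300 * 105     # kg: 300 passengers at 105 kg each (values from the text)
fuel_weight = 93_516    # kg: estimated LH2 trip fuel (reserves not included)
mtow = 400_000          # kg: ASSUMED MTOW of the Mach 2.4 concept, not given in the text

print(f"PL/MTOW = {payload / mtow:.1%}")        # ~7.9%, consistent with the ~8% reported
print(f"FW/MTOW = {fuel_weight / mtow:.1%}")    # ~23.4%, consistent with the ~23% reported
print(f"PL/FW   = {payload / fuel_weight:.1%}") # ~33.7%, consistent with the ~34% reported
```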
This would be achieved thanks to its lower externalities and higher potential savings in passenger time despite the higher operational costs. Consequently, this example indicates that internalising all costs of the particular actors/stakeholders involved in both the demand and the supply side of the given air route network could eventually make supersonic flights economically feasible under the given conditions. The overall social-economic feasibility: The relationship between the average total monetary contribution to the GDP and the average total cost of both subsonic and supersonic flight(s) carried out on an average route of the network is shown in Fig. 10. As can be seen, in the cases of both subsonic and supersonic flights the average contribution to GDP would be overall higher than the average total cost thus making their difference generally net positive. This difference would be about 5% greater for supersonic flights compared to subsonic flights. The above figures indicate that supersonic flights could eventually be overall social-economically feasible but only under the considered circumstances. The above-mentioned results enable synthesising some qualitative pros and cons of supersonic flights, relevant for the particular actors/stakeholders involved, which are summarised as follows: These pros and cons indicate that the full implementation of the future supersonic commercial flights is and will remain a challenge for all above-mentioned main actors/stakeholders involved. CONCLUSIONS This paper has modelled the performance of a given long-haul air transport network operated by existing subsonic and prospectively forthcoming supersonic commercial aircraft. Infrastructural, technical/technological, operational, economic, environmental, and social performance has been considered. Analytical models of the performance indicators have been developed. The indicator models have been applied in estimating the performance of an existing air route network, consisting of 25 long-haul routes operated exclusively by commercial subsonic aircraft (currently) and supersonic (prospectively by the year 2050) aircraft. The input for the estimation of the performance indicators of the network and its average route was empirical data for subsonic aircraft/flights, available data from the corresponding design concepts of supersonic aircraft/flights, and the elements of the specified "what-if" operational scenarios The subsonic aircraft are assumed to be powered by Jet-A fuel and the supersonic aircraft by LH 2 (Liquid Hydrogen) fuel. The results have indicated that: • The supersonic and subsonic flights carried out on the same routes with the same frequency, seat capacity, and load factor would perform the same transport work. The supersonic flights would have about 2.9 time higher technical productivity thanks to the much higher cruising speed. The required fleet size of supersonic aircraft would be about 46% smaller than that of the subsonic ones, thanks to the higher cruising speed and the consequently shorter turnaround time on the route(s). • The operating cost of supersonic flights, depending on the fuel cost, would be about 18-85% higher than that of the subsonic flights. This has confirmed concerns about their future economic feasibility from the airline perspective. However, these flights would provide the substantial savings in passenger time costs, of up to 46%. 
• The fuel consumption and related GHG emissions of the supersonic flights would be about 70% and 19% higher, respectively, than those of the subsonic flights. This would primarily be due to the much higher fuel consumption of LH2 fuel, caused mainly by the much higher operating speeds, despite its much higher energy content compared to Jet A fuel. However, despite the higher emissions, their costs would be much lower thanks to the prospective charges/externalities applied to the particular gases. Combined with their GWP (global warming potential), the GHG emissions of supersonic flights would contribute to savings in global warming and climate change of up to about 94%. The supersonic aircraft would generally not substantially compromise land use as an environmental externality; the possible impact of the logistics of supplying LH2 fuel in this context remains to be further considered.
• The noise levels produced by supersonic aircraft at airports and during cruising at higher altitudes would be about 6-13% above certain prescribed tolerable level(s). Therefore, this (noise) externality will continue to be an important subject in the further commercialisation of supersonic aircraft/flights.
• The selected derived design-related indicators of the technical/technological performance of supersonic aircraft, such as the payload/weight, fuel/weight, and payload/fuel ratios, would be about 13, 22, and 12 percentage points lower, respectively, than those of subsonic aircraft. This indicates inherent specificities and challenges in their future design.
• If all the above-mentioned costs were fully internalised as externalities, supersonic flights could be about 20% more beneficial than their subsonic counterparts. This would be the
2020-07-09T09:09:55.935Z
2020-07-03T00:00:00.000
{ "year": 2020, "sha1": "8a9c9edee97004dbf5b8499032171971f4052fbc", "oa_license": "CCBY", "oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/88B64037E205A73FECFCDCB82063B0E0/S0001924020000469a.pdf/div-class-title-modelling-performance-an-air-transport-network-operated-by-subsonic-and-supersonic-aircraft-div.pdf", "oa_status": "HYBRID", "pdf_src": "Cambridge", "pdf_hash": "9e7ea7f522c36bcaf1b239ea824a109b56f32dee", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Environmental Science" ] }
237406950
pes2o/s2orc
v3-fos-license
Prognostic Value and Immunological Characteristics of a Novel RNA Binding Protein Signature in Cutaneous Melanoma Background The existing studies indicate that RNA binding proteins (RBPs) are closely correlated with the genesis and development of cancers. However, the role of RBPs in cutaneous melanoma remains largely unknown. Therefore, the present study aims to establish a reliable prognostic signature based on RBPs to distinguish cutaneous melanoma patients with different prognoses and investigate the immune infiltration of patients. Methods After screening RBPs from the Cancer Genome Atlas (TCGA) and Gene Expression Omnibus (GEO) databases, Cox and least absolute shrinkage and selection operator (LASSO) regression analysis were then used to establish a prediction model. The relationship between the signature and the abundance of immune cell types, the tumor microenvironment (TME), immune-related pathways, and immune checkpoints were also analyzed. Results In total, 7 RBPs were selected to establish the prognostic signature. Patients categorized as a high-risk group demonstrated worse overall survival (OS) rates compared to those of patients categorized as a low-risk group. The signature was validated in an independent external cohort and indicated a promising prognostic ability. Further analysis indicated that the signature wasan independent prognostic indicator in cutaneous melanoma. A nomogram combining risk score and clinicopathological features was then established to evaluate the 3- and 5-year OS in cutaneous melanoma patients. Analyses of immune infiltrating, the TME, immune checkpoint, and drug susceptibility revealed significant differences between the two groups. GSEA analysis revealed that basal cell carcinoma, notch signaling pathway, melanogenesis pathways were enriched in the high-risk group, resulting in poor OS. Conclusion We established and validated a robust 7-RBP signature that could be a potential biomarker to predict the prognosis and immunotherapy response of cutaneous melanoma patients, which provides new insights into cutaneous melanoma immunotherapeutic strategies. INTRODUCTION Cutaneous melanoma is the most aggressive and dangerous skin malignancy with high levels of morbidity, and its incidence continues to increase each year (Swetter et al., 2019). Patients with early-stage can usually be cured by surgical resection, and more than 90% of patients survive for more than 5 years (Siegel et al., 2020). Once metastasis occurs, patients suffer from a dismal prognosis with a median overall survival (OS) of only 6 to 10 months, and the 5-year survival rate is dismal (<10%) (Schadendorf and Hauschild, 2014;Tang et al., 2016). Generally, the risk stratification and prognosis of patients with melanoma are mainly determined by clinical characteristics, such as Breslow thickness, ulcers, and lymphatic vascular infiltration (Hyams et al., 2019). Nevertheless, due to the phenotype and genetic heterogeneity of malignant melanoma, conventional clinicopathological features are still limited or restricted in their ability to accurately predict individual outcomes (Diamantopoulos and Gogas, 2016). Therefore, these sobering data highlight the urgent need for the development of novel malignant melanoma-specific genomic models to accurately predict clinical outcomes of melanoma patients and provide a guide to more effective individual therapies. RNA binding proteins (RBPs) effectively and ubiquitously regulate transcripts throughout their life cycle (Corley et al., 2020). 
RBPs contain a large class of more than 2,000 proteins that play vital roles in multiple RNA processes, including stability, transport and translation, splicing, and degradation of RNAs (Mohibi et al., 2019;Corley et al., 2020). Recent studies have shown that RBPs not only affect normal cell processes but also have become major players in the initiation and progression of cancer (Masuda and Kuwano, 2019;Schuschel et al., 2020;Weiße et al., 2020). Dysregulation, localization, or post-translational modification of RBPs can not only increase the expression of oncogenes but also promote tumorigenesis by reducing the expression of tumor suppressor genes. For example, the RBPs RBM38 and RBM24, as single members of the RBP family containing RRM, have similar functions by regulating the same target. Both RBM38 and RBM24 can be induced by the tumor suppressor p53, thereby inhibiting the translation of p53 mRNA (Zhang et al., 2011;Zhang M. et al., 2018). RBM38 promotes or inhibits tumor formation mainly depends on the state of p53 because RBM38 can inhibit the expression of wild-type and mutant p53 through mRNA translation (Zhang et al., 2014). PCBP1 has been reported to be a tumor suppressor to inhibit tumorigenesis, development, and metastasis in several types of cancer . Elevated PCBP1 was found to promote p27 mRNA stability and translation, but inhibit the expression of oncogenic STAT3 isoform and MAPK1 (Shi et al., 2018;Wang et al., 2019). The RBPs hnRNP Abbreviations: RBPs, RNA binding proteins; TCGA, The Cancer Genome Atlas; GEO, gene expression omnibus; LASSO, least absolute shrinkage and selection operator; TME, tumor microenvironment; OS, overall survival; GSEA, gene set enrichment analysis; AUC, area under curve; KM, Kaplan-Meier; PCA, principal component analysis; ROC, receiver operating characteristic; BLCA, bladder urothelial carcinoma; BRCA, breast invasive carcinoma; OV, ovarian serous cystadenocarcinoma; MESO, mesothelioma; LUAD, lung adenocarcinoma. K has been reported to upregulate the expression of several oncogenes, such as MYC and Src (Adolph et al., 2007;Gallardo et al., 2020). However, the mechanisms by which most RBPs cause cancer is still unknown. In the present study, we constructed a robust 7-RBP prognostic signature based on public datasets. The signature was verified in an independent external cohort and indicated a promising predictive ability. Then, a nomogram combining risk score and clinicopathological characteristics was then established to evaluate the 3-and 5-year OS in cutaneous melanoma patients. Analyses of immune infiltrating, immune-related pathways, TME, immune checkpoint, and drug susceptibility revealed significant differences between the two groups. GSEA analysis revealed that basal cell carcinoma, notch signaling pathway, melanogenesis pathways were enriched in the high-risk group, resulting in poor OS. Data Source and Preprocessing The RNA-sequencing profiles and clinical data for TCGA SKCM cohort were obtained from The Cancer Genome Atlas (TCGA) database 1 . SKCM-related datasets GSE65904 from the GEO database 2 were used as an independent external validation set. For data cleaning, samples with missing clinical data were excluded. After preprocessing, there were 413 samples in the TCGA dataset, 210 in the GSE65904 dataset. The clinical statistics information is shown in Table 1. A total of 1542 genes coding for RBPs were obtained from the previous publications (Baltz et al., 2012;Castello et al., 2012;Kwon et al., 2013;Cunningham et al., 2015). 
Prognostic Signature Construction To screen the prognostic related RBPs, univariate Cox regression analysis was conducted to evaluate the relationship between the expression level of RBPs and the OS of patients in the TCGA cohort. P-value < 0.05 was set as cutoff criteria. To minimize the risk of over-fitting and remove highly related genes, the "glmnet" R package was used for Lasso regression analysis, and the stepwise multiple Cox regression method was used to establish the optimal model. Risk score = Exp1 × β1 + Exp 2 × β2 + · · · + Exp n × βn. β is the regression coefficient, while Exp represented gene expression level. Based on the median of estimated risk score, patients were categorized into low-and high-risk subgroups. Survival analyses were carried out for the comparison of the prognostic outcomes between two subgroups using the "survival" and "survminer" R packages. Further, the ROC curves were applied to assess the predictive capabilities of the above signature by "SurvivalROC" R package. In addition, principal component analysis (PCA) and t-SNE were carried out to explore the different gene expression patterns of the two risk groups. Validation of Prognostic Signature Using the same method as that used in the training dataset, the risk score of each patient in the GSE65904 validation dataset and the corresponding median risk scores were calculated separately, after which the patients were grouped two groups (high and low). The survival curves of the two groups were plotted using the Kaplan-Meier method. Time-dependent area under the curve (AUC) analysis was performed to assess the predictive performance of the model. Immune Infiltrating Analysis Given the critical role of immune infiltrating cells in cutaneous melanoma tumorigenesis and development, the abundance of 22 immune cell types were calculated by CIBERSORT 3 algorithm (Newman et al., 2015). The tumor microenvironment (TME) scores of each single melanoma patient were estimated using the ESTIMATE algorithm (Yoshihara et al., 2013). In addition, the expression of the immune checkpoint was used to examine the molecular relationship with the prognostic signature. Drug Susceptibility Analysis We use the R software package "pRRophetic" to predict the antineoplastic drug susceptibility for patients with the highand low-risk groups. The regression analysis was conducted to obtain the half-maximal inhibitory concentration (IC50) estimated value of each specific antineoplastic drug treatment. Development and Validation of a Prognostic Nomogram The univariate and multivariate Cox regression analyses were conducted to detect whether this signature can act as an independent prognostic factor for cutaneous melanoma patients. Stratification analyses were performed to further validate the predictive accuracy of the model. These variables include age (≤60 and >60 years), gender (male and female), tumor stage (I-II and III-IV), Breslow depth (≤1.5, 1.5-3.0, and >3.0), tumor type (primary tumor, regional cutaneous, regional lymph node, and distant metastasis), ulceration (yes and no), and tumor status (tumor-free and with tumor). To quantitatively estimate cutaneous melanoma prognosis in clinical practice, a prognostic nomogram that integrated both the signature, age, and tumor stage was generated based on the multivariable Cox regression analysis. The ROC curve and calibration plot were drawn to estimate the predictive performance and discriminating ability of the nomogram scoring system. 
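A minimal sketch of how the resulting risk score and the median split could be computed for a new expression matrix. The coefficient values below are placeholders with signs matching the reported high-/low-risk genes; the actual β values come from the multivariate Cox fit described above and are not listed in this excerpt.

```python
import numpy as np
import pandas as pd

# Placeholder coefficients: signs follow the reported risk/protective grouping,
# but the magnitudes are NOT the fitted multivariate Cox betas.
coefs = pd.Series({"RPP25": 0.21, "FBXO17": 0.18, "NYNRIN": 0.15,
                   "RPF1": -0.19, "RBM43": -0.17, "APOBEC3G": -0.22, "PATL2": -0.14})

def risk_scores(expr: pd.DataFrame) -> pd.Series:
    """Risk score = sum_i (beta_i * Exp_i) over the 7 signature RBPs,
    for a samples-by-genes expression matrix."""
    return expr[coefs.index].mul(coefs, axis=1).sum(axis=1)

def risk_groups(scores: pd.Series) -> pd.Series:
    """Split patients into high/low risk at the cohort median score."""
    return pd.Series(np.where(scores > scores.median(), "high", "low"), index=scores.index)
```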
Gene Set Enrichment Analysis (GSEA) Gene set enrichment analysis software (version 4.1.0) was utilized to investigate the meaningful biological processes that might be involved in causing the different prognoses between low-and high-risk groups based on the Hallmarks gene collection file (C2cp.kegg.v7.2.symbols.gmt). The number of permutations was set to 1,000 times, and the "phenotype labs" were set to high-risk score versus low-risk score. The outcomes meet FDR q < 0.25 and NOM p < 0.05 were considered significant. Statistical Analyses All statistical analyses were implemented using R version 4.0.4. We used the Chi-squared test and Fisher's exact test to evaluate the differences in categorical data between different datasets and groups and the Mann-Whitney U test or Student t-test to compare the quantitative data. Figure 1 showed the research idea about this study. A systematic analysis was carried out for the critical roles and the potential prognostic values of RBPs played in cutaneous melanoma. At first, we downloaded transcriptome information and clinical data from TCGA and GEO datasets. Then, a total of 1541 RBPs were acquired from previous publications, which were integrated with the mRNA from the TCGA database to obtain 1492 RBPs in cutaneous melanoma. A total of 1374 RBPs were identified by taking the intersection of 1492 RBPs and mRNAs from the GEO dataset. Construction of the RBPs-Related Signature The relationship between the expression of these 1374 RBPs and OS was analyzed by univariate Cox regression. As a result, 35 RBPs were left as prognostic-associated candidates (P < 0.001) (Figure 2A). Then, sixteen prognostic RBPs were conducted with LASSO regression analysis ( Figure 2B) and partial likelihood deviance ( Figure 2C). Subsequently, multivariate Cox regression analysis on the 16 RBPs was conducted to further select a robust and effective risk model for prognosis prediction and identified 7 RBPs (RPF1, RBM43, RPP25, APOBEC3G, PATL2, FBXO17, and NYNRIN) ( Figure 2D). A prognostic signature based on 7 RBPs, including 3 high-risk RBPs (RPP25, FBXO17, and NYNRIN) and 4 low-risk RBPs (RPF1, RBM43, APOBEC3G, and PATL2), and the risk score were obtained. The risk score was obtained in line with the expression quantities of the 7 RBPs in various samples and the correlation coefficients. The risk After scoring each patient's risk through the signature, patients above and below the mean risk score were assigned to the high-and low-risk group, respectively. Figure 2E showed the status and survival time of patients in the training set. PCA and t-SNE analyses indicated that discernible dimensions between high-and lowscore patients (Figures 2F,G). Comparing the KM curves of the two groups, we found a significant difference in the OS of patients between the two groups. Patients with highrisk have an expressively lower OS than those with low-risk (P < 0.001, Figure 2H). The AUC of the ROC curves was 0.805 in the training cohort, suggesting the great predictive performance of this signature (Figure 2I). External Validation of the Prognostic Significance of RBPs The GSE65904 dataset was separately used to determine the validity and robustness of the signature as an independent validation cohort (Figures 3A-F). The risk scores, survival status of patients, and PCA and t-SNE demonstrating the variation tendencies of high-and low-risk groups were, respectively, as shown in Figures 3C,D. 
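The Kaplan-Meier comparison of the two risk groups described above was performed in R with the "survival"/"survminer" packages; an equivalent sketch in Python with the lifelines library is given below, where the column contents (OS time, event indicator, risk label) are assumptions about the input layout.

```python
import matplotlib.pyplot as plt
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

def km_by_risk(time, event, group):
    """Kaplan-Meier curves and log-rank test for the high- vs low-risk split.
    `time`: OS (e.g. months); `event`: 1 = death observed; `group`: 'high'/'low'."""
    hi, lo = (group == "high"), (group == "low")
    ax = plt.gca()
    for mask, label in [(hi, "high risk"), (lo, "low risk")]:
        KaplanMeierFitter().fit(time[mask], event[mask], label=label).plot_survival_function(ax=ax)
    res = logrank_test(time[hi], time[lo],
                       event_observed_A=event[hi], event_observed_B=event[lo])
    return res.p_value
```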
As shown in Figure 3E, the OS between the high-and low-risk groups was proved to be statistically different (P < 0.001), which is consistent with the training set. The AUC for this risk score signature is 0.718, proving that the model has a promising predictive value ( Figure 3F). To understand whether this newly identified RBP signature can specifically predict prognosis of cutaneous melanoma or has general prognostic value for other cancers, we evaluated prognostic value of the RBP signature in 32 cancer types using the TCGA pan-cancer data and found that the signature can also predict prognosis of other 5 types of cancer (Supplementary Figures 1, 2), including bladder urothelial carcinoma (TCGA-BLCA), breast invasive carcinoma (TCGA-BRCA), ovarian serous cystadenocarcinoma (TCGA-OV), mesothelioma (TCGA-MESO), and lung adenocarcinoma (TCGA-LUAD). Taken together, these results further validated that signature has high validity for survival prediction in cutaneous melanoma. Correlation of the Signature With TME in Cutaneous Melanoma To explore the role of the signature on the TME of cutaneous melanoma, we analyzed the association between the signature, the abundance of 22 immune cells, 13 immune-related pathways, and TME score (Stromal score, Immune score, and Estimate score). Interestingly, high risk score was positively correlated with M0 macrophages, M2 macrophages, resting mast cells, activated mast cells, neutrophils, and negatively corrected with B cells memory, M1 macrophages, Monocytes, activated NK (Figure 4A), and 13 immune-related pathways ( Figure 4B). In addition, several vital immune-checkpoint-relevant genes were also analyzed and indicated that the risk score was significantly associated with the expression of the checkpoint markers, PD-1 (PDCD1), PD-L1 (CD274), and CTLA-4 ( Figure 5A), implicating the potential roles of the signature model in the response to immunotherapy in cutaneous melanoma patients. Finally, the signature was negatively associated with immune score (P < 0.001), stromal score (P < 0.001), and ESTIMATE score (P < 0.001; Figures 5B-D). Drug Susceptibility Analysis To manifest the application of antineoplastic drugs in melanoma patients hierarchically, we explored the antineoplastic drug susceptibility in the high-and low-risk groups based on the prognostic signature. As shown in Figure 6, after comprehensive analysis for the antineoplastic drugs, we noted that Gefitinib, Bosutinib, Cisplatin, Embelin, Etoposide, AKT inhibitor VIII, and Gemcitabine were more susceptible to the patients in the low-risk group compared with the patients in the high-risk group, while patients with high-risk seem more vulnerable to Docetaxel, Paclitaxel, and Erlotinib. Development of a Nomogram for Prognosis Prediction Univariate and multivariate Cox regression analyses were performed to determine whether the risk scores were independent risk factors of melanoma. The result confirmed that age, tumor stage, and risk score were independent prognostic factors (Figures 7A,B). To assess whether signature retained its prognostic value in various subgroups, we conducted a clinical stratification analysis. Kaplan-Meier analysis indicated that the OS of the high-risk score group was remarkably shorter than that of the low-risk score group ( Figure 7C). For the establishment of quantitative methods for cutaneous melanoma prognosis, a prognostic nomogram was established according to age, tumor stage, and risk score (Figure 8A). 
The TME score (Immune score, Stromal score, and Estimate score). The AUCs of the nomogram at 3-and 5-year survival times were 0.739 and 0.728, respectively ( Figure 8B). We used the calibration curve to show the prediction value of the nomogram, which results indicating that curve of the nomogram at 3, and 5 years OS were close to 45 • line (Figures 8C,D), indicating high predictive accuracy. Gene Set Enrichment Analysis Gene set enrichment analysis was used to discover the underlying biological mechanisms to further understand the development of cutaneous melanoma and the reasons for the different prognoses of patients with different scores. As shown in Figure 9, multiple significant signaling pathways were enriched in high-and lowrisk group patients, but there was a different enrichment in the two groups. The high-risk group was mainly involved in basal cell carcinoma, notch signaling pathway, melanogenesis, and purine metabolism, while toll-like receptor, Jak-STAT, chemokine, natural killer cell-mediated cytotoxicity signaling pathway were the most significantly enriched signaling pathways in the low-risk group (Figure 9). DISCUSSION Despite breakthrough advancements in cutaneous melanoma treatment, some cutaneous melanoma patients still have a poor prognosis, especially when metastasis is detected. Due to the phenotype and genetic heterogeneity of malignant melanoma, conventional clinical features are still limited to accurately predict individual outcomes and survival. Accurate prognostic prediction and individualized clinical treatment strategy are the basis of precision medicine (Arnedos et al., 2014). Most of the established clinical markers for treatment response and prognosis of cutaneous melanoma are based on clinical features, and their accuracy and specificity are limited. Traditional AJCC TNM staging is mainly based on anatomical information and cannot adequately assess the prognosis of cutaneous melanoma patients. Therefore, exploring the molecular mechanisms and screening reliable melanoma-specific genomic signatures are urgently needed to improve prognosis assessment and individualized treatment. In recent years, with in-depth research on the regulatory role of RBPs in various RNA processes, researchers gradually realized the importance of RBPs in cancer. However, a systematic analysis of the relationship between RBPs and cutaneous melanoma is lacking. In the present study, we established an RBP-related prognostic signature, assessed the correlation between this model and prognosis as well as the immunotherapy response, and evaluated the potential clinical applications of the model. The high-throughput "omics" data combined with bioinformatic analysis provided valid and economical methods to depict the prognostic value of RBPs in cutaneous melanoma. First, we combined the mRNA expression profiles of patients retrieved from the TCGA database and identified 1374 RBPs. Then, univariate, LASSO, and multivariate Cox regression analyses were carried out to develop a 7-RBP signature. The signature could classify patients into different risk subgroups with significantly different prognoses both in the TCGA and GEO sets. The reliability of the signature in predicting OS of melanoma patients was validated through ROC curves and PCA analyses between the two subgroups in the TCGA and GEO sets. 
To understand whether this newly identified RBP signature can specifically predict prognosis of cutaneous melanoma or has general prognostic value for other cancers, we evaluated prognostic value of the RBP signature in 32 cancer types using the TCGA pan-cancer data and found that the signature can also predict prognosis of other 5 types of cancer. Analyses of immune infiltrating, TME, immune checkpoint and drug susceptibility revealed significant differences between the two groups. The GSEA indicated that cancer-related processes and pathways were significantly enriched in the high-risk group defined by the RBP-related signature. Additionally, a nomogram combining 7-RBP-signature with clinical characteristics was performed to verify the robustness of the model for speculating OS in melanoma patients. The favorable predictive performance of the nomogram was validated by the discrimination and calibration curves. The prognostic signature contained 7 RBPs. Some of the RBPs were found to affect the malignant phenotypes of tumors, such as RPP25, FBXO17, RBM43, and APOBEC3G. Consistent with the results of this study, previous studies have shown that RPP25 was significantly upregulated in tissues and cell lines of cervical cancer relative to the normal tissues. RPP25 can serve as a target gene of miR-3127-5p to promote the EMT process in cervical cancer (Yang et al., 2020). FBXO17, a negative regulator of glycogensynthase kinase-3β (GSK-3β), was identified by polyubiquitination and targeting of kinases to proteasomal degradation (Suber et al., 2017). FBXO17 was found to be upregulated in tumor tissues and promote malignant progression of cancer through different mechanisms, such as activation of Akt (Suber et al., 2018), or wnt/β-catenin pathway (Liu et al., 2019). Patients with elevated FBXO17 have a worse prognosis in multiple cancers, such as high-grade glioma (Du et al., 2018) and hepatocellular carcinoma (Liu et al., 2019). RNA-binding motif protein 43 (RBM43) was reported to be a tumor suppressor and correlated with poor prognosis in liver cancer. The overexpression of RBM43 can inhibit the proliferation of hepatocellular carcinoma cells, and decreased the growth of transplanted tumors in vivo through modulation of cyclin B1 expression (Feng et al., 2020). APOBEC3G has been reported to be dysregulated in tumor tissues and is associated with the prognosis of multiple cancers (Leonard et al., 2016;Han et al., 2020). There is an obvious correlation between APOBEC3G and tumor-infiltrating immune cells (Leonard et al., 2016;Han et al., 2020). There is growing evidence that the TME, in which immune cells and molecules are important components, acts an important role in tumor development and the degree of immune cell infiltration is highly correlated with patient prognosis (Seager et al., 2017). The typical structure of the TME is composed of stromal components, endotheliocyte, mesenchymal stem cells, tumor-associated fibroblast and pericyte included, and immunocytes (Turley et al., 2015). With the recent development of technologies such as RNA-seq, it is possible to systematically analyze the TME and the functional diversity of tumor-infiltrating immune cells, the sensitivity of patients to immunotherapy, and the prognosis . Melanoma is one of the most immunogenic tumors because it has an incredibly high genomic mutation load and is most likely to trigger a specific adaptive anti-tumor immune response. 
Therefore, it has the greatest potential for response to immunotherapy (Marzagalli et al., 2019). In this research, we first explored the relationship between the RBP signature and tumor-infiltrating immune cells. We discovered that the relative contents of B cells memory, M1 macrophages, Monocytes, activated NK cells, Plasma cells, activated T cells CD4 memory, CD8 T cells, and 13 immune-related pathways were negatively correlated with the risk score. We further conducted correlational analysis for the signature and the expression of tumor immune checkpoint genes and noticed that with the risk score was significant with the expression of the checkpoint markers, such as PD-1, PD-L1, and CTLA-4, implicating the potential roles of the signature in the response to immunotherapy in cutaneous melanoma patients. Recently, the study of immune checkpoint therapy targeting PD-1 and CTLA-4 was blooming. The immunotherapies aiming at PD-1 and CTLA-4 have been widespread applied for melanoma (Specenier, 2016;Franklin et al., 2017). PD-1 is an important checkpoint receptor on the surface of T cells, and PD-1 combined with its agonist PD-L1 can also inhibit T cell activation. PD-L1 is expressed by melanoma cells or tumor-associated stroma, and this expression is closely related to the efficacy of anti-PD-1 immunotherapy (Taube et al., 2014). Several anti-PD-1 antibodies, such as nivolumab, ipilimumab, and pembrolizumab, have been approved for the treatment of melanoma (Franklin et al., 2017). Moreover, we further evaluated the association between the signature and TME. The result indicated that the signature was negatively associated with an immune score, estimate score, and ESTIMATE score might indicate that highrisk score inhibits immunoreaction to promote the progression of melanoma cells. This study, for the first time, established a prognostic model based on RBPs, which could be a good tool for predicting the prognosis of cutaneous melanoma patients. Nevertheless, we have to admit that some limitations were also existing in our study. First, the results of this retrospective study based on bioinformatics analysis might exist a bit of bias, the prediction accuracy of the model needs to be further confirmed using prospective multicenter randomized controlled trials. The validation in cellular experiments, and animal and tissue models warrant further investigation. Second, the information from the TCGA database is limited and incomplete, which may reduce the predictive accuracy of the model. DATA AVAILABILITY STATEMENT The original contributions presented in the study are included in the article/Supplementary Material, further inquiries can be directed to the corresponding author/s. AUTHOR CONTRIBUTIONS JT and YZ made substantial contributions to the conception, design, interpretation, and preparation of the final manuscript. JT, CM, LY, and YS participated in the coordination of data acquisition and data analysis, and reviewed the manuscript. All authors contributed to the article and approved the submitted version.
2021-09-04T13:33:26.896Z
2021-08-31T00:00:00.000
{ "year": 2021, "sha1": "e15d200efb14e38e5cc81a6581951f85e23ba6e5", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fgene.2021.723796/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e15d200efb14e38e5cc81a6581951f85e23ba6e5", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
251077226
pes2o/s2orc
v3-fos-license
Actively Respond to the Opportunities and Challenges of Ideological and Political Education in Colleges and Universities in the Network Era : At present, the network era quietly comes, brings different life experience to people. As the high-end frontier of China’s social network development, colleges and universities are not only the direct experiencers and promoters of the information wave, but also the guiding units of stable ideology. The moral quality of college students is inseparable from the cultivation of colleges and the guidance of teachers. Throughout the characteristics of the network age, the impact of the application of network technology on ideological and political education in colleges and universities has become a major research topic of concern to the general public. With the rapid development of information technology in the 21st century, great changes have taken place in the educational mechanism and humanistic environment of Chinese universities, among which the college students are the most affected by the Internet. The Internet has many influences on students’ values and behavior patterns. The coordination of university moralism and political structure causes unnecessary restrictions to students and makes them unable to learn ideological and political courses well. This is a new challenge and a new opportunity for college education, which needs to be paid attention to by college leaders and teachers. Ideological and political education opportunities in colleges and universities In the Internet information age, it is necessary to make a full and objective study on the new opportunities and challenges faced by ideological and political education and teaching in colleges and universities, which is reflected in the following different research fields. First, it has enriched the educational information of colleges and universities. In the era of knowledge economy, all kinds of Internet information can be found in universities and public places. In the process of ideological and moral teaching in China's higher education, learners are easily influenced by such Internet information. Moreover, since information resources on the Internet can be shared, learners can use the Internet to obtain any information needed to meet their personal development needs and to cultivate their academic performance. Ability to teach independently. At the same time, due to network space for higher school ideological and political education workers provide a lot of guidance information, ideological education workers can also according to their own needs, according to the reference value for selecting information, to the learners as the center, to carry out ideological and political education teaching, so as to improve the level of the ideological and political teaching, improve the education performance. In addition, it can also expand the scope of ideological and political teaching in colleges and universities to a certain extent, and further optimize the form of ideological and political teaching, so as to carry out the management of ideological and political teaching in schools more safely [1] . Second, it is beneficial to comprehensively promote the socialization and development of ideological teaching. 
The sustainable development of network information technology is an important key to the socialization of ideological and political teaching in colleges and universities, which eliminates the estrangement between colleges and communities and makes various effective adjustment measures in the community environment. For the social development goal of ideological and political teaching in colleges and universities, the personalized support for learning experience and putting forward development ideas can ensure the smooth realization, promote the socialization development of educational activities, and lay a certain foundation for the sustainable development of college education. Help students establish correct concepts In the information Internet development period, the network has already become the main software indispensable in our work, life and study. In order to enhance the effectiveness and effectiveness of ideological and political teaching in schools, educators should make use of the Internet to carry out political education. The head teacher should cultivate students' correct values, understand the Internet, correctly identify the various forms of information in the Internet, and optimize the working conditions of using teachers for ideological and political teaching through optimization and integration. Provide students with the difficulties they meet on the road of life, cultivate students' correct values, so as to form the correct moral outlook of life [3] . Optimize teaching methods In the process of ideological and political education, teachers can choose different teaching modes and use multimedia and other auxiliary teaching tools to enhance classroom awareness and guide students to better master ideological and political courses. Or with the help of network resources, master the latest social information, guide students to correctly understand, manage and use these online information, so as to develop social practical ability, can also continue to improve. Through the network question and answer and online conversation and other different forms, effectively understand each student in the class to understand the thought, so as to optimize the school's ideological and political classroom teaching, and reform the teaching mode of ideological and political class. In this process, ideological and political workers can through a large number of online information collection, eliminate resources matching with the psychological and age characteristics of college students, constantly expand students' horizons, improve the existing knowledge structure system, do ideological and political work needs to do a good job of education. Use information network to improve the image of ideological and political education Because multimedia technology itself has the characteristics of audio-visual fusion and pictures and words, using these characteristics of multimedia to carry out ideological and political education, students in the ideological and political characteristics while receiving educational knowledge, can improve the enthusiasm and enthusiasm of learning. For example, students can use film and TV clips to enhance their historical experience and achieve a truly immersive learning experience. For example, some typical cases can be used to teach the corresponding MAO Zedong theory, so that students can intuitively feel what is happening in real life. Therefore, it is very necessary to use network information to carry on curriculum education. 
Conclusion: Generally speaking, the trend of combining ideological and political education with network education is inevitable. To make more effective use of online education, educators should understand its drawbacks, restrict websites that are not helpful to students' learning, improve the educational effect of ideological and political education, and combine and improve online and traditional education so as to cultivate more excellent talents.
2022-07-27T15:02:11.118Z
2022-06-20T00:00:00.000
{ "year": 2022, "sha1": "00daff0c4edd9726f9630e7d882e21945a795001", "oa_license": "CCBYNC", "oa_url": "https://ojs.piscomed.com/index.php/L-E/article/download/3122/2935", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "f9b7a512bcda626710ef87a42c5e406259a9bec6", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [] }
260675260
pes2o/s2orc
v3-fos-license
Monitoring and prediction of landslide-related deformation based on the GCN-LSTM algorithm and SAR imagery A key component of disaster management and infrastructure organization is predicting cumulative deformations caused by landslides. One of the critical points in predicting deformation is to consider the spatio-temporal relationships and interdependencies between the features, such as geological, geomorphological, and geospatial factors (predisposing factors). Using algorithms that create temporal and spatial connections is suggested in this study to address this important point. This study proposes a modified graph convolutional network (GCN) that incorporates a long and short-term memory (LSTM) network (GCN-LSTM) and applies it to the Moio della Civitella landslides (southern Italy) for predicting cumulative deformation. In our proposed deep learning algorithms (DLAs), two types of data are considered, the first is geological, geomorphological, and geospatial information, and the second is cumulative deformations obtained by permanent scatterer interferometry (PSI), with the first investigated as features and the second as labels and goals. This approach is divided into two processing strategies where: (a) Firstly, extracting the spatial interdependency between paired data points using the GCN regression model applied to velocity obtained by PSI and data depicting controlling predisposing factors; (b) secondly, the application of the GCN-LSTM model to predict cumulative landslide deformation (labels of DLAs) based on the correlation distance obtained through the first strategy and determination of spatio-temporal dependency. A comparative assessment of model performance illustrates that GCN-LSTM is superior and outperforms four different DLAs, including recurrent neural networks (RNNs), gated recurrent units (GRU), LSTM, and GCN-GRU. The absolute error between the real and predicted deformation is applied for validation, and in 92% of the data points, this error is lower than 4 mm. Introduction Landslides are among the world's most common geological hazards, which can be caused by heavy rainfall, earthquakes, snowstorms, and human activities such as deforestation, construction, and excavation (Keefer 1984;Iverson 2000;Bozzano et al. 2004;Guerriero et al. 2021;Picarelli et al. 2022). In urban areas, landslides can be devastating due to the high concentration of structures and people, leading to significant property damage, injury, and loss of life and disrupting essential services such as transportation and communication (Miele et al. 2021;Liang et al. 2022). Landslides in urban areas can be reduced by assessing slope stability properly, regulating development activities, and implementing comprehensive early warning systems (Bozzano et al. 2011;Gao et al. 2022;Tsironi et al. 2022). Also, one of the essential factors that can be used to reduce landslide risks is proposed as this paper's purpose is the accurate and reliable prediction of the cumulated deformation in urban areas (Chen et al. 2017;Confuorto et al. 2022). The southern Italian Apennine is among the sites exhibiting the highest density of landslides globally (Guerriero et al. 2019;Di Carlo et al. 2021). For example, the town of Moio della Civitella (Salerno Province) has continuously damaged its urban settlement (Di Martire et al. 2015;Infante et al. 2019). Predicting cumulative deformation landslides is challenging because it involves transferring historical slide information in time. 
Moreover, the slow movement under settlement cover further complicates the task. The presence of human intervention can also prevent the formation and identification of the typical morphologies of an evolving landslide. It is important to track landslides affecting settlements to identify landslide areas and understand their evolution accurately. Assessment of future activities can be aided by this information. Mapping, predicting, and classifying geomorphological and geospatial data may contribute to landslide formation and occurrence (Del Soldato et al. 2017;Rosi et al. 2018). Analysis and prediction of geological hazards using data from various sources serve as an important basis for mitigating these intensive effects caused by landslides, such as loss of life, property damage, the destruction of property and infrastructure, environmental degradation, and deformation of communities (Dai et al. 2002;Gutiérrez et al. 2008;Kjekstad and Highland 2009;Lacasse et al. 2009). The deformation of landslides over time is an effective dataset for understanding the characteristics of landslides and predicting their future development (Jiang et al. 2021). Using satellite remote sensing, urban landslides can be identified and monitored at high spatial resolutions (Scaioni et al. 2014;Nolesini et al. 2016;Amitrano et al. 2021;Khalili et al. 2023d). By allowing limited deformation rates to be recorded and operated over broad areas, costs and computation times can be minimized (Tofani et al. 2013;Di Traglia et al. 2021;Macchiarulo et al. 2022). Synthetic aperture radar (SAR) imagerybased methods (Solari et al. 2020) have been widely applied in this context, providing multi-temporal deformation rate distribution maps that support landslide identification under settlement cover and both retrospective and operational monitoring (Foumelis et al. 2016). Permanent scatterer interferometry (PSI) (Ferretti et al. 2001) is a SAR image processing technique that measures ground movement over time with high accuracy. It uses stable points (PSs) in SAR images as reference points and provides accurate and long-term monitoring of subsidence, volcanic activity, tectonic processes, slowmoving landslides, and other causes of ground deformation (Zhou et al. 2009;Lu et al. 2012;D'Aranno et al. 2021;Khalili et al. 2023d, b). X-band imagery acquired in the COSMO-SkyMed (CSK) mission is an example of satellite products that are particularly suitable for retrieving deformation data of landslides under urban cover because of their high spatial resolution and short revisiting time of acquisition (Costantini et al. 2017;Di Martire et al. 2017;Khalili et al. 2023a). Therefore, it can measure the surface deformation at high temporal resolution and with high accuracy and be used to diagnose the progression of landslide movement (Confuorto et al. 2022). Hence, after processing the CSK images using the PSI technique, a valuable dataset is generated for training the DLAs as labels to predict the cumulated surface deformations over time. Various types of MLA have been implemented for accurate and timely landslide prediction (Zhou et al. 2017;Gan et al. 2019). It also consists of Bayesian networks (Chen et al. 2017), logistic regression (Wang et al. 2017), decision trees, random forests (Hong et al. 2016), and support vector machines (Liu et al. 2021), which have been widely used to capture the occurrence of landslides. 
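As a point of contrast with the deep learning approach developed later in the paper, the sketch below shows how one of the conventional MLAs listed above (a random forest) would typically be fitted to tabular predisposing factors. It is purely illustrative and is not the authors' pipeline; the file name, column names, and label column are hypothetical placeholders.

```python
# Minimal sketch (not the paper's pipeline): fitting a conventional MLA, here a
# random forest, to tabular predisposing factors. All file and column names are
# hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

df = pd.read_csv("predisposing_factors.csv")            # hypothetical table, one row per PS point
features = ["elevation", "slope", "aspect", "twi", "spi", "ndvi"]  # assumed column names
X, y = df[features], df["mean_velocity_mm_yr"]           # assumed label (PSI mean velocity)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X_tr, y_tr)
print("MAE:", mean_absolute_error(y_te, model.predict(X_te)))
```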
On the one hand, DLAs are highly recommended to outperform conventional statistical models for most applications of time-series prediction (Confuorto et al. 2022). Multivariate regression models (Krkač et al. 2020), auto-regressive integrated moving average models (Zhang 2003), as well as other traditional approaches can be used to forecast individual time series in a wide range of applications (Zhang 2003;Xu and Niu 2018), however, are not able to describe the behavior of multivariate time series. On the other hand, by integrating multiple processing layers, datasets with multiple dimensions can be analyzed using DLAs to extract learning features and nonlinear dependencies (Li et al. 2020;Ma and Mei 2021). It has been shown that DLAs can be used as a model for predicting deformation due to recent advances in this field (Jiang and Chen 2016;Hajimoradlou et al. 2020;Hua et al. 2021). Two famous types of DLAs include RNNs and convolutional neural networks (CNNs). RNNs, including LSTM and GRU, are neural network architectures commonly used for sequential data processing tasks such as time-series prediction. They pass information from one time step to the next, allowing the network to maintain a memory of past inputs. In contrast, CNNs are designed to work by applying filters to local patches of the input data. While RNNs are suited for tasks where past inputs influence future predictions, CNNs are ideal for tasks where spatial relationships between input data are important. These DLAs illustrate the promise and have the potential to enhance significantly landslide prediction, providing valuable information for disaster risk management (Azarafza et al. 2021;Dao et al. 2020;Habumugisha et al. 2022;Huang et al. 2020;Orland et al. 2020;Saha et al. 2022;Shirzadi et al. 2018). Another type of DLAs is GNNs, a class of neural network algorithms designed to operate on graph-structured data. GNNs typically consist of multiple layers of computations; each aggregates information from neighboring nodes in the graph and updates the node representations. Numerous applications in this field have been demonstrated to be efficient and effective using GNNs, including traffic forecasting, human gesture detection, and urban flow prediction (Yu et al. 2018;Huang et al. 2020;Wang et al. 2020). GCNs are a type of GNNs that use graph convolutions to propagate information across nodes and edges (Khalili et al. 2023c). In contrast with traditional CNNs that operate on regular grid-like data, GCNs generalize the convolution operation to graph-structured data. GCNs can learn representations of nodes in a graph considering the graph structure. This enables them to capture complex relationships between nodes and make predictions about them in a graph. In predicting cumulative deformation caused by landslides, GNNs model the interactions between predisposing factors such as geological, geomorphological, and geospatial factors. By taking into account the spatial relationships between these factors, GNNs can make more accurate predictions of deformation compared to traditional MLAs and DLAs that only consider single-node feature information (Kuang et al. 2022;Zhou et al. 2021;Zeng et al. 2022). 3 In this paper, an advanced and synergic version of GNNs named GCN-LSTM is applied to a time series of cumulative landslide deformations and predisposing factors to predict the future cumulative deformation in the study area, considering spatio-temporal dependencies between features. 
The proposed modeling strategy was divided into two parts: first, a single GCN was implemented to detect the spatial interdependency between paired data points in the initial timeframe; then, GCN-LSTM was applied to detect the spatio-temporal dependency between paired data points and predict the cumulative deformation caused by landslides, using the time series of PS points as labels and the predisposing factors as features. In comparisons with different DLAs, the presented algorithm (GCN-LSTM) outperforms all the traditional DLAs. The following sections describe the study area in greater detail, and the datasets utilized for developing the suggested algorithm are analyzed and discussed; afterward, the outcomes and conclusions are presented. Case study The Moio della Civitella landslides are located in the Cilento, Vallo di Diano, and Alburni National Park of the Salerno Province of southern Italy. They involve the slope on which the Moio della Civitella village is built, from 600 to 200 m a.s.l., which is characterized by typical hilly morphology and a low gradient. The landslides involve the Crete Nere of the Saraceno Formation, cropping out in this sector of the Apennine Mountains. This formation is mainly represented by argillites with carbonate intercalations and weathered siliciclastic arenites. The structural characteristics of the landslide area are similar to those of the southern sector of the Italian Apennines, with diffuse and pervasive discontinuities and extremely variable bedding characterizing the described rocks. The Saraceno Formation is locally overlaid by Quaternary rocks comprising heterogeneous debris encased in a silty-clayey matrix (Di Martire et al. 2015). The heterogeneity in lithology and the consequent complex hydrogeological behavior of the rocks forming the slope contribute to the instability at Moio della Civitella. Following Cruden and Varnes (1996), such landslides are flows and rotational and translational slides (Fig. 1). The main slope instabilities were believed to be the result of ancient landslides affecting most of the slope. According to the map in Fig. 1, most of the identified landslides directly impacted settlements, including lifelines and significant communication routes (Infante et al. 2019;Miano et al. 2021;Mele et al. 2022). As a result of the presence of such landslides, the area of Moio della Civitella has been extensively investigated using topographic measurements, inclinometers, and GPS networks (Di Martire et al. 2015). Such investigations indicated that the landslides actively moved, with displacements between nearly −2 cm and +1.5 cm. SAR data SAR data have been widely used in landslide research due to their wide range of applications, high spatial and, in some cases, temporal resolution, and their ability to work under any weather conditions (Herrera et al. 2011;Scaioni et al. 2014;Sellers et al. 2023). The objective of this study was to process the X-band imagery from the CSK missions for monitoring landslide-related surface deformation; the imagery was processed with the PSI technique to obtain labels for the algorithms suggested in this study. Due to their high spatial resolution and short revisiting periods, these satellite products are particularly suitable for determining the location of landslides in urban areas. As part of analyzing the COSMO-SkyMed image stacks, 66 descending images acquired during 2012-2016 were analyzed (Infante et al. 2019), providing an excellent starting point for the first step of the proposed algorithms (GCN) in this study.
Furthermore, for the period 2015-2019, 65 descending images have been collected (Mele et al. 2022), which can be applied to implementing several DLAs, including RNN, GRU, and LSTM, and also predict cumulative deformation using the proposed model (GCN-LSTM) in the second step. As a result of the analysis, maps of mean deformation rates (Fig. 2) and time series of deformations have been generated. As shown in Table 1, an overview of the images acquired throughout the experiment can be found. Predisposing factors Geological, geomorphological, and geospatial data were used as predisposing factors. These include elevation, slope, aspect, Topographic Wetness Index (TWI), Stream Power Index (SPI), geology, flow direction, total curvature, plan curvature, profile curvature, and also geospatial data such as Normalized Difference Vegetation Index (NDVI) and land use to contribute to the formation of landslides in this case study (Chen et al. 2018Achour et al. 2018). To learn and train the GCN models, relationships and connections have been created between the predisposing factors mentioned above and briefly discussed below practically. The primary geological, geospatial, and geomorphological data used in this study include: i) a 1:50,000 geological map of Moio della Civitella, which is used to study the geological background; ii) a 10-m pixel resolution Digital Elevation Map (DEM) of the case study mainly utilized to investigate the topographic and geomorphological features of Moio della Civitella and obtain elevation, slope, flow direction, aspect, total, plan, and profile curvature, and also Topographic Wetness Index (TWI), and Stream Power Index (SPI); iii) Landsat7 ETM + remote sensing images with a resolution of 30 m for bands 1-7 (Time: from 2012 to 2015, PATH: 188, ROW: 032), mainly used to discuss the climatic and environmental characteristics of Moio della Civitella and acquire the Normalized Differential Vegetation Index (NDVI), and land-use type. The TWI is used in hydrological analysis to measure water accumulation in an area. It indicates steady-state moisture and quantifies the effect of topography on hydrological processes. The TWI is based on the slope and upstream contributing area width and was designed for hillslope catenas. It has been found to be correlated with several soil attributes, including horizon depth, silt percentage, organic matter content, and phosphorus content. The TWI is calculated differently based on how the upstream contributing area is determined. It is not applicable in flat areas with vast water accumulations (Novellino et al. 2021). A slope's strength is significantly affected by the scouring and infiltration of flowing water. SPI is a parameter that measures stream power and erosion power of flowing water. SPI estimates streams' capacity to potentially modify an area's geomorphology through gully erosion and transportation. To determine the erosive power of flowing water, SPI considers the relationship between discharge and a specific catchment area . Curvature analysis provides information regarding these causative sources' location, depth, dip direction, and magnetic susceptibility. A quadratic surface is applied within a standard moving window of 3 × 3 in curvature analysis. The total curvature is converted into the profile and plan curvatures, profile curvature along the maximum slope direction, and plan curvature perpendicular to the maximum slope direction. 
Each section of the curvature space provides crucial information to determine whether the source is 2D or 3D. This is an essential consideration in remote field mapping since it allows the interpreter to distinguish between a contact, a dyke, and extensive lithology without observing directly ). The direction a slope faces, known as the aspect, can influence the distribution and flow of water, soil, and vegetation growth. Because of these impacts, the aspect is a significant consideration in geomorphological and ecological research. Typically, the aspect is measured with a compass, with angles ranging from 0 to 360 degrees, where the north is represented by 0/360, east by 90, south by 180, and west by 270. Including aspect data in geological analysis can enhance our understanding of geological processes and their interactions (Pyrcz and Deutsch 2014). The Normalized Difference Vegetation Index (NDVI) measures vegetation's near-infrared reflectance and absorption of red light. There is a range of − 1-+ 1 in the NDVI. There is, however, no clear distinction between each type of land cover. In the case of negative NDVI values, it may be dealing with water; however, when the NDVI value is close to + 1, it is more likely that it is dense green foliage. It may even be an urban area when the NDVI is close to zero, as no green leaves are present. A high NDVI value indicates that the vegetation is healthier than when the NDVI value is less; it is a standardized method of assessing vegetation health. A low NDVI indicates a lack of vegetation (Ammirati et al. 2022). A geological map illustrates the distribution of lithologies on the surface of the Earth. Geological maps show the distribution of different types of rock and deposits and the location of geological structures such as faults and folds. In general, rock types and unconsolidated materials are depicted in different colors according to their types. Data that have been manually collected are displayed on a geological map (Di Napoli et al. 2022). The hydrological characteristics of a surface can be determined by determining the flow direction from every raster pixel. In this function, a surface is taken as an input, and a raster is created to show the flow direction between each pixel and its steepest downslope neighbor (Di Napoli et al. 2020b). With a spatial resolution ranging from 30 by 30 m to three arc seconds (approximately 90 by 90 m), the Shuttle Radar Topography Mission (SRTM) Digital Elevation Model (DEM) covers about 80% of the globe between 60° north and 56° south. There are a variety of formats in which elevation data are available, and they are continually being generated (Di Napoli et al. 2020a). This study uses a blend of geological, geospatial, and geomorphological features alongside the proposed algorithm (GCN-LSTM), implicating a critical data categorization phase (Soares et al. 2022;Nasir et al. 2022). This categorization is executed during the pre-processing of features in QGIS software. We acknowledge that discretizing continuous variables may lead to a loss of information due to the introduced artificial strata, but this is an accepted trade-off given our methodology. The decision to categorize our continuous variables stems from the nature of the proposed model and the specific characteristics of our geospatial data. Models such as GCN and LSTM capture nonlinear and complex dependencies more effectively when dealing with categorized features (Cui et al. 2020;Chen et al. 2022). 
The categorized features can better represent our geospatial data's complex, multi-dimensional relationships, contributing to better model interpretability (Kshetrimayum et al. 2023). Geospatial data are naturally complex and multi-dimensional and describe objects or events with a location on or near the Earth's surface. Representing such data efficiently requires effective categorization techniques, dealing with a mix of numerical and categorical data. Categorization permits sophisticated data analysis methods such as geospatial analytics, MLAs, and DLAs. These methods can uncover patterns and relationships that might be too complex to understand through raw data alone. After categorization, data normalization is the subsequent step in our code implementation. Normalization scales the data with a mean of 0 and a standard deviation of 1, ensuring each feature contributes equally to the analysis and preventing bias toward certain features. For deep learning models such as GCNs and LSTMs, normalizing inputs can improve model convergence during training and reduce the risk of vanishing or exploding gradients. We used min-max normalization to scale all features to the range [0, 1] (Ioffe and Szegedy 2015;Borkin et al. 2019). We concur with the potential concerns about normalizing inherently categorical variables but believe that it was essential due to the architecture of the GCN-LSTM model. Features with larger magnitudes may disproportionately influence the learning process; hence, normalization ensures that all features are on a similar scale and contribute equally to the model's learning. Our normalization process also respects the inherent properties of our categorical variables. For instance, we took special consideration with the "Aspect" feature, a circular variable normalized using a method that preserved its circular nature. Traditional normalization techniques that scale linear variables to a consistent range are unsuitable for circular variables as they distort the relationships in the data. Therefore, we converted this circular variable into two variables using sine and cosine transformations, thus preserving the circular proximity of the data points. This transformation ensures that the circular nature of the "Aspect" feature is preserved during normalization, allowing accurate representation for subsequent analysis by the proposed algorithm (Goodfellow et al. 2016). Hence, the combined use of feature categorization and normalization techniques ensures that they are on a similar scale and that the GCN-LSTM model can learn from them without any bias toward larger magnitude features. This approach is critical for geospatial and remote sensing data with significant noise and variability. By using these methods, we can create more relevant or informative features, reduce the input data size, and potentially improve the computational efficiency of our proposed deep learning algorithm. Methodology A spatio-temporal prediction strategy is required to forecast cumulative deformation caused by landslides because the evolution of landslide movement often reveals the spatial and temporal characteristics of the movement. The first step of our strategy (Fig. 3) involves pre-processing to obtain spatial attributes obtained with the PSI processing technique (velocity) and the time-series datasets as inputs to the second step. The GCN module captured the spatial-dependent variables, while the LSTM module captured the temporaldependent variables. 
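As a concrete illustration of the feature preparation described above, the following is a minimal sketch of the two preprocessing steps: min-max scaling to [0, 1] and the sine/cosine encoding that preserves the circular nature of the "Aspect" variable. It assumes a simple pandas DataFrame of predisposing factors with a hypothetical "aspect" column in degrees; it is not the authors' QGIS-based workflow.

```python
# Minimal preprocessing sketch, assuming a pandas DataFrame of predisposing factors
# with a hypothetical "aspect" column given in degrees (0-360).
import numpy as np
import pandas as pd

def preprocess(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()

    # Circular variable: replace aspect by its sine and cosine so that, e.g.,
    # 359 degrees and 1 degree remain close after encoding.
    rad = np.deg2rad(out.pop("aspect"))
    out["aspect_sin"] = np.sin(rad)
    out["aspect_cos"] = np.cos(rad)

    # Min-max scaling of every column to the range [0, 1], so that no feature
    # dominates the learning process because of its magnitude.
    for col in out.columns:
        cmin, cmax = out[col].min(), out[col].max()
        out[col] = (out[col] - cmin) / (cmax - cmin) if cmax > cmin else 0.0
    return out
```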
It is necessary to obtain the best interval of correlation distance in the first step of the analysis; in the second phase of implementation, cumulative deformation is then predicted by the proposed model (GCN-LSTM), and the results are discussed and compared with the outputs of other DLAs (simple RNNs, GRU, and LSTM, which work based on temporal dependency) and of another algorithm that works based on spatio-temporal dependency, named GCN-GRU. SAR data processing A satellite-based differential interferometry technique using synthetic aperture radar (DInSAR) (Gabriel et al. 1989) has proven to be helpful in detecting ground movements caused by subsidence, landslides, earthquakes, and volcanic activity, as well as in monitoring structures and infrastructure (Di Martire et al. 2014). However, the DInSAR method is susceptible to issues such as temporal and spatial decorrelation, signal delays due to atmospheric conditions, and errors in orbit or topography (Hooper et al. 2004a). Over time, improvements in these techniques have helped to mitigate some of these limitations, one of which is PSI. With the application of advanced DInSAR techniques, the accuracy of the rate maps and time series of deformations has been enhanced to around 1-2 mm/year and 5-10 mm, respectively (Manzo et al. 2012;Calò et al. 2014;Pulvirenti et al. 2014;Xing et al. 2019). The PSI technique uses radar targets known as permanent scatterers to obtain high interferometric coherence (Hooper et al. 2004b;Hooper 2008). This is achieved by eliminating geometric and temporal interferometric effects thanks to the high stability of the targets over time. The Digital Elevation Model (DEM) used in this technique has a cell resolution of 3 m × 3 m and a multi-looking factor of 3 × 3 in range and azimuth. The coherent pixels technique (Blanco-Sànchez et al. 2008), implemented in the SUBSIDENCE software developed at Universitat Politecnica de Catalunya, was used to apply the PSI method. The software processed the co-registered images and selected all possible interferogram pairs with spatial baselines lower than 300 m, using a temporal phase coherence threshold of 0.7. Finally, the deformation rate map along the line of sight (LoS) and the time series of cumulative deformation were calculated. Recurrent neural networks (RNNs) Predicting the deformation caused by landslides can be done using RNNs. In this approach, the RNN is trained on historical data of landslide deformation to make predictions about future deformation. The RNN's ability to store information from previous time steps allows it to identify patterns and dependencies in the data, and this information is then used to make informed predictions. The input to the network consists of geological, geospatial, and geomorphological data, while the label used for comparison with the proposed algorithm's output is the cumulative deformation caused by landslides. Finally, the model's output predicts the amount and distribution of future cumulative deformation caused by the landslide. This model is useful for disaster preparedness, risk assessment, and a deeper understanding of the processes behind landslides. RNNs are a natural choice for temporal prediction because they are specifically designed to handle sequential data and can effectively handle time series of deformation over time. These references can be used to learn more about RNNs and their equations in detail (Liao et al. 2019;Sherstinsky 2020).
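A minimal sketch of such a recurrent baseline predictor is given below. It is purely illustrative: the layer sizes, sequence length, and feature count are assumptions, not the configurations tuned in this study, and the recurrent cell could equally be a simple RNN or a GRU.

```python
# Minimal baseline sketch of a recurrent predictor for deformation time series
# (illustrative only; sizes and shapes are assumptions, not the tuned configuration).
import torch
import torch.nn as nn

class DeformationRNN(nn.Module):
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        # nn.RNN or nn.GRU could be substituted here for the other baselines.
        self.rnn = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)      # next cumulative deformation value

    def forward(self, x):                      # x: (batch, time_steps, n_features)
        out, _ = self.rnn(x)
        return self.head(out[:, -1, :])        # predict from the last hidden state

# Toy usage: 32 PS points, 10 past epochs, 12 features per epoch.
model = DeformationRNN(n_features=12)
x = torch.randn(32, 10, 12)
y_hat = model(x)                               # shape (32, 1)
```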
Long short-term memory (LSTM) The LSTM algorithm is an advanced artificial neural network that deals with time-sensitive information. It is beneficial for predicting cumulative deformation in landslides as it has the ability to overcome the limitations of standard RNNs. One of these limitations is the "vanishing gradients" (The vanishing gradients problem arises because the gradients (derivatives) of the loss function concerning the weights in the network become very small as they are backpropagated through time. This makes it difficult for the network to learn and retain information from earlier time steps in the sequence) problem, which refers to the difficulty of training RNNs to model long-term dependencies in sequential data. The critical innovation in LSTMs is the use of particular neurons called LSTM cells, which have gates that control the flow of information in and out of the cell, allowing the network to store and use vital information over a more extended period. The algorithm operates by utilizing memory cells that can retain information for extended periods and control gates to manage the flow of information in and out of these cells. Training the LSTM on previous landslide deformation data can identify patterns and accurately predict future deformation. LSTMs have been demonstrated to effectively handle complex, nonlinear relationships in time-series data, making them a reliable tool for landslide prediction (Hochreiter et al. 2001;Yuan et al. 2020). Gated recurrent unit (GRU) GRUs are RNNs designed explicitly for time-series predictions. This can be applied in various scenarios, such as predicting cumulative deformation from landslides. GRUs are considered a simpler alternative to LSTM networks, another type of RNN utilized for timeseries predictions. The hidden state (the hidden state is a vector of values that captures the memory of the network across time steps. The hidden state is updated at each time step, based on the input data and the previous hidden state) of GRUs is updated based on the current input and its previous hidden state, allowing it to capture information from previous time steps in a sequence, which is then used to predict future values. The unique feature of GRUs is its use of "gates" that regulate the flow of information in and out of the hidden state, preventing the network from losing crucial information as it processes new data and making it easier to model long-term dependencies in a sequence Chung et al. 2014). GRUs would be trained on historical geological, geospatial, and geomorphological data, and measurements of PS points taken over time at a landslide site to predict cumulative deformation due to landslides. The network will then use this information to identify patterns in the data and make future deformation predictions, which can be used for making decisions to mitigate the risk posed by the landslide. LSTMs and GRUs differ in the following ways: • Compared to LSTMs, GRUs have fewer parameters and are computationally less expensive. • Controlling the flow of information between GRUs and LSTMs is accomplished by a gating mechanism. In contrast, GRUs have two gates (update and reset gates), while LSTMs have three gates (input, output, and forget gates). • GRUs do not have a separate memory cell, whereas LSTMs have one to store information for long periods. Information is instead stored in the hidden state by GRUs. • With long sequences, RNNs faced the problem of vanishing gradients. LSTMs were introduced to overcome this problem. 
LSTMs handle this issue better than GRUs, but GRUs do address this issue somewhat. Graph neural networks (GNNs) GNNs have proven to be effective in modeling graph data in recent years. These DLAs process data structured in graphs using an update function that takes the interdependence of nodes into account. This processing results in a feature space that provides information about the relationships between nodes in the graph. For example, GNNs can be used to analyze the connections between factors such as geology, geomorphology, and geospatial data in predicting landslide deformation. These factors are represented as nodes in a graph, with their relationships shown as edges. The GNN then processes this graph by updating the representation of each node and using the final representation to make predictions about cumulative deformation. For comparison, RNNs (such as LSTMs and GRUs) are specifically designed to process sequential data and identify dependencies between time steps (temporal dependency), whereas GNNs are optimized for processing graph-structured data and understanding node relationships (spatial dependency) (Wu et al. 2021;Behrouz and Hashemi 2022). Graph convolutional networks (GCNs) GCNs work based on filter parameters that are shared across all nodes of a graph (Cortes et al. 2015), and they extract new input features on a graph G = (V, E) that has a feature vector x_i for every node i. The feature space is an N × D matrix X, where N is the number of nodes and D is the number of features; moreover, the interdependency between each pair of nodes is represented in the form of an adjacency matrix A, an N × N zero-one matrix in which A_ij = 1 if there is an edge (interdependency) between nodes i and j, and A_ij = 0 otherwise. In this study, correlation distance was used to create the adjacency matrix for the first modeling approach, GCN, since a zero-one adjacency matrix was considered to determine the interdependency between each pair of points. More specifically, the correlation distance was first computed for each pair of points within the area; then 1 was assigned to the pairs with a correlation distance in the interval [0, a), where a < 2, and 0 to the rest of the elements of the adjacency matrix. The hyperparameter "a" is therefore obtained by hyperparameter tuning for the GCN model applied in the first step of the proposed modeling approach. Hyperparameter tuning involves adjusting the various parameters of a GCN algorithm to optimize its performance for a specific dataset. For example, in a GCN, the number of layers, the number of nodes in each layer, and the type of activation function used can all affect the algorithm's performance, and adjusting these parameters can make a GCN work better for a particular graph. In this study, Pearson correlation was used to determine the amount of non-Euclidean interdependency in the GCN: Pearson correlation measures the linear dependence between two variables, and by analyzing the Pearson correlation between nodes, the GCN can determine the amount of non-Euclidean interdependency in the graph. The output of this learning process is a node-level output matrix Z of size N × F, where F is the number of output features for each node. Some pooling operation is required for modeling graph-level outputs (Cortes et al. 2015).
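A minimal sketch of the zero-one adjacency matrix construction described above follows: pairs of points whose Pearson correlation distance falls in [0, a) are connected. The default threshold value used here is only a placeholder; in the paper it is selected by hyperparameter tuning.

```python
# Sketch of the zero-one adjacency matrix built from Pearson correlation distance.
# The threshold "a" is a placeholder; in the paper it is found by hyperparameter tuning.
import numpy as np
from scipy.spatial.distance import pdist, squareform

def correlation_adjacency(X: np.ndarray, a: float = 0.1) -> np.ndarray:
    """X: (n_points, n_features) matrix of predisposing factors per PS point."""
    D = squareform(pdist(X, metric="correlation"))   # Pearson correlation distance, range [0, 2]
    A = (D < a).astype(np.float32)                   # 1 inside the interval [0, a), else 0
    np.fill_diagonal(A, 0.0)                         # self-loops are added later as A~ = A + I
    return A
```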
The hidden state of the network at layer (l + 1) can be written as H^(l+1) = f(H^(l), A), where H^(0) = X and H^(L) = Z, with L as the number of layers. Models then differ only in how the activation function (an activation function is a mathematical function applied to the output of a neuron to introduce nonlinearity and enable the network to model complex relationships in the input data) f(., .) is selected. For example, a simple forward propagation rule is H^(l+1) = σ(A H^(l) W^(l)), where W^(l) is the trainable weight matrix for the l-th layer and σ(.) is a nonlinear activation function such as the tanh. The presence of A in this rule means that the feature vectors of all neighbors of a target node are summed up except for the node itself, but adding an identity matrix solves this issue. Furthermore, the symmetrically normalized matrix D^(-1/2) Ã D^(-1/2) is used to normalize the adjacency matrix and help the algorithm work smoothly. Therefore, the forward propagation equation can be written as H^(l+1) = σ(D^(-1/2) Ã D^(-1/2) H^(l) W^(l)) (Gordon et al. 2021), with Ã = A + I, where I is the identity matrix and D is the degree matrix of Ã. The node embeddings can be fed into any loss function, and stochastic gradient descent can be implemented to train the weight parameters using a backpropagation strategy (this strategy adjusts the weights of the connections between neurons based on the error between the network's predicted output and the actual output). In this study, GCN is used in the first stage of our proposed modeling approach to create a regression model that predicts the velocity from twelve predisposing features, including elevation, slope, general curvature, NDVI, TWI, SPI, geologic map, land use, flow direction, plan curvature, and profile curvature; the velocity itself is obtained from the spatial and temporal landslide processing covering 2012 to 2016. The main goal of this step is to identify the best adjacency matrix during hyperparameter tuning, built upon the concept of Pearson correlation distance. The proposed algorithm (GCN-LSTM) Combining GCN with LSTM can enhance the ability to model complex relationships between nodes in graph-structured data, so that both spatial and temporal information are captured more effectively. In other words, combining GCN and LSTM provides a better approach to handling the intricacies of graph data, including the relationships between nodes and their changes over time. Figure 4 shows the network structure of the GCN-LSTM model proposed in this paper. In this model, encoders and decoders make up the main structure. The graph network encoder uses multiple parallel GCN modules to extract the key features of the graph at different time steps. To resolve the long-term and short-term dependencies in the time-series and sequence data, the time-series features are passed to an LSTM, which analyzes them and extracts further features from the sequence data. The encoder then generates an encoded vector, which is sent to the decoder and decoded as the last step. A multilayer feedforward neural network is used as part of the decoding process to analyze the features of the coding vector further. Afterward, the processed data are sent to the GCN network to produce the predicted values.
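The sketch below condenses the normalized propagation rule H^(l+1) = σ(D^(-1/2) Ã D^(-1/2) H^(l) W^(l)) and the idea of a GCN encoder feeding an LSTM into a few lines of PyTorch, in the spirit of the architecture described above. It is not the authors' implementation: it uses a single GCN layer per time step and one LSTM layer, and the layer sizes are illustrative rather than the tuned values reported later in the paper.

```python
# Condensed sketch (not the authors' implementation) of a GCN layer following
# H^(l+1) = sigma(D^-1/2 (A + I) D^-1/2 H^(l) W^(l)) and of a GCN encoder feeding an LSTM.
import torch
import torch.nn as nn

def normalize_adjacency(A: torch.Tensor) -> torch.Tensor:
    A_hat = A + torch.eye(A.size(0))               # A~ = A + I (add self-loops)
    d = A_hat.sum(dim=1)
    D_inv_sqrt = torch.diag(d.pow(-0.5))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt         # D^-1/2 A~ D^-1/2

class GCNLayer(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)
    def forward(self, H, A_norm):                  # H: (n_nodes, in_dim)
        return torch.relu(A_norm @ self.W(H))

class GCNLSTM(nn.Module):
    """GCN encoder applied per time step, LSTM over the resulting node sequences."""
    def __init__(self, n_features: int, gcn_dim: int = 16, lstm_dim: int = 64):
        super().__init__()
        self.gcn = GCNLayer(n_features, gcn_dim)
        self.lstm = nn.LSTM(gcn_dim, lstm_dim, batch_first=True)
        self.head = nn.Linear(lstm_dim, 1)         # cumulative deformation per node
    def forward(self, X_seq, A_norm):              # X_seq: (time, n_nodes, n_features)
        H_seq = torch.stack([self.gcn(X_t, A_norm) for X_t in X_seq])  # (time, n_nodes, gcn_dim)
        out, _ = self.lstm(H_seq.permute(1, 0, 2))  # treat nodes as the batch dimension
        return self.head(out[:, -1, :])             # (n_nodes, 1)
```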
Depending on the nature of the data to be used, how hyperparameter tuning is employed to obtain them, and how the data will be used, the number of layers and nodes in both the GCN and LSTM parts of the model will also change. This paper implemented the GCN-LSTM as the most influential model to detect the spatio-temporal behavior of a time series of 65 cumulative landslide deformations for 4085 data points from 2012 to 2016 in the first step and 2015 to 2019 in the second step. Model hyperparameters In the initial step of the proposed algorithm, a GCN was used to determine the amount of non-Euclidean interdependency (spatial dependency) between any pair of points within the area; specifically, the velocity of landslides, obtained through spatial and temporal processing from 2012 to 2016, was used as a label in a regression model. Using twelve predisposing factors, GCN was modeled to determine the best correlation distance between data points, which was deemed the most critical hyperparameter for the prediction task in the second step. The correlation distance illustrates how each pair of data points are similar to each other based on the values of features they obtain, not the Euclidean distance between them. The optimal range of correlation distance for creating an adjacency matrix was the interval [0, 0.1) since it provided the best evaluation metrics among other scenarios after tuning hyperparameters. The more dependent the two data points were, the closer the correlation distance was to zero, so in the adjacency matrix, one was selected for the pairs with a correlation distance less than 0.1 and 0 for the rest of the pairs. This interval can thus be used to predict future cumulative landslide deformations, which was this study's second objective. Table 2 provides the optimal hyperparameters for the first modeling approach. Overall performance comparison In the second step, a mixture model consisting of GCN and LSTM and another rival, including GCN and GRU, were implemented on a time series of 65 cumulative landslide deformations for 4085 PS data points from 2015 to 2019, and they were used to predict the future deformation in the study area; furthermore, since the velocity in the first step was obtained based on spatial landslide processing from 2012 to 2016, the best interval for correlation distance in the first step was used to compute adjacency matrix for the second step. In other words, this interval was perceived as the amount of information transferring from the previous timeframe to the new one, predicting cumulative deformation caused by landslides. After hyperparameter tuning for different scenarios, two channels were selected for the GCN part, each with 20 and 12 nodes, respectively; moreover, two channels were designed for the LSTM part of the proposed model, with 400 nodes for each one. The results of hyperparameter tuning for the GCN-LSTM approach are presented in Table 3. Table 4 reports the prediction performance of all models in terms of four evaluation metrics (mean squared error (MSE), mean absolute error (MAE), root-mean-squared error (RMSE), and R-squared (R^2) (Chicco et al. 2021)). As can be seen in this table, the proposed model GCN-LSTM consistently outperforms other methods in all the evaluation metrics. This result demonstrates that long-and short-term memories are essential in predicting cumulative deformation caused by landslides in this dataset. 
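For reference, the four evaluation metrics used in this comparison can be computed directly from the observed and predicted cumulative deformations at the PS points, as in the short sketch below (standard definitions; not tied to the authors' code).

```python
# Sketch of the four evaluation metrics (MSE, MAE, RMSE, R^2) computed from
# observed and predicted cumulative deformations.
import numpy as np

def evaluate(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    err = y_true - y_pred
    mse = np.mean(err ** 2)
    mae = np.mean(np.abs(err))
    rmse = np.sqrt(mse)
    ss_res = np.sum(err ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return {"MSE": mse, "MAE": mae, "RMSE": rmse, "R2": r2}

# Example: evaluate(np.array([3.1, -1.2, 0.4]), np.array([2.9, -1.0, 0.6]))
```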
According to the literature, the significant difference between GRU and LSTM is that GRU's bag has two gates that are reset and updated, while LSTM's bag has three gates: input, output, and forget. This means that GRU has fewer gates than LSTM, so it is less complex than LSTM because it has fewer gates; also, when the dataset is small, the GRU will be preferred; otherwise, LSTM will be used. Therefore, it confirms our conclusion since 4085 data points are used in this study. In addition, traditional DLAs such as simple RNN, GRU, and LSTM perform poorly due to their inability to detect spatial dependency between data points. In fact, by incorporating GCN into LSTM, GCN-LSTM can effectively incorporate the graph structure information into the sequential data. GCN-LSTM is generalized better to unseen data than RNNs, LSTM, and GRU, as it can model the graph structure information, which is more expressive and can help the model capture the underlying patterns in the data more effectively. Another reason is the better handling of non-Euclidian relationships, which means that GCN can effectively handle non-Euclidian dependencies between nodes in a graph, which results in a better performance than RNNs, LSTM, and GRU, which are limited to Euclidian and temporal relationships. Therefore, the GCN-LSTM model is excellent at modeling long-and short-term dependencies in time series and significantly improves the prediction performance over the traditional methods. As shown in Table 4, in the second step, the prediction task, GCN-LSTM, outperformed all the three DLAs, such as RNN models, and worked better than its spatio-temporal counterpart GCN-GRU for all evaluation metrics in the test set. Visualizations To explore and better understand how the proposed prediction model (GCN-LSTM) worked, the image on the right corresponds to the cumulative deformations on March 19, 2019 (Fig. 5). It is placed for comparison with another image corresponding to the last predicted epoch for cumulative deformations on the same date. This figure illustrates how the proposed model has been able to correctly predict both positive and negative deformation amounts and locations with a mean absolute error of less than 0.02. Also, this figure illustrates how the proposed model has performed well in areas with deformations over 2 cm (red box and blue box), areas with a mixture of positive and negative deformations (black box), and areas with deformations lower than 2 cm (white box). As can be seen in this figure, our model provides an excellent fit to the observed data, demonstrating its ability to predict cumulative deformation accurately and reliably and, thus, the risk of landslides in the studied area. The close match between the predicted and observed cumulative deformation indicates that the GCN-LSTM model effectively captures the complex relationships between the various factors contributing to cumulative deformation prediction, such as geological, geomorphological, and geospatial data types. This figure highlights the potential of the GCN-LSTM model to serve as a valuable tool for predicting cumulative deformation and the risk of landslides, which can inform decision-making and disaster response efforts. The absolute error evaluation index was considered to display the error map of predicted and real deformation values. Then, it was divided into eleven intervals to classify the estimation error with a suitable color. As shown in Fig. 
6, more than 92% of all PS points were within the range of less than 4-mm error in prediction, which shows the high accuracy of the results. It is also necessary to present the real deformation and the predicted values to evaluate the quality of the performance of the GCN-LSTM. The first step was to set aside 25% of the total cumulative deformation epochs, which was 16 epochs, as a test set from July 22, 2018, to March 19, 2019, in order to determine the change and similarity trend between the output of the proposed models and the real deformation in the case study. According to Fig. 7, each interval (mentioned earlier) was defined based on absolute error, and a point was randomly selected as a representative point of each interval in the case study. GCN-LSTM predictions follow the real deformation trend, indicating that the proposed model can be used to predict cumulative deformation caused by landslides in real time. Also, based on the calculated absolute error between predicted and real deformation, eleven intervals were determined for each of the 4085 PS points in the study area. Table 5 shows that 3766 points (approximately 92.2%) of the studied area have been predicted with an error of less than 4 mm, of which the majority, 2528 points, have an error of less than 2 mm. Based on Fig. 8, the results obtained in this paper using GCN-LSTM for predicting cumulative deformation caused by landslides seem promising. An absolute error below 4 mm for 92% of the points (including the P1, P2, P3, and P4 graphs), with 67% of them showing an error of less than 2 mm (including P1 and P2), indicates that the algorithm was able to capture the underlying patterns in the data effectively: it recognized the complex relationships between the nodes in the graph, which in turn produced a high level of accuracy in the prediction. The graphs of points P5, P6, P7, P8, P9, P10, and P11 represent the less than 8% of all PS points with more than 5-mm error. Despite this, as seen in these graphs, the trend of the predicted deformation followed the real deformation trend, and the proposed GCN-LSTM model worked correctly. Conclusions Considering the spatio-temporal relationship between each pair of points in a particular area (Moio della Civitella), landslide cumulative deformation forecasting was investigated based on a combined DLA, GCN-LSTM, in this paper. By implementing non-spatial DLAs such as RNN, GRU, and LSTM, we conclude that cumulative landslide deformation is highly affected by the spatial interdependency between data points, since the non-spatial implementations resulted in weaker evaluation metrics compared to the spatio-temporal approach. The proposed DLA (GCN-LSTM) effectively addresses the challenge of incorporating the non-Euclidean spatial dependency between paired points. It also considers the time-series characteristics of landslides by monitoring the temporal behavior of data points over the passage of time. The core findings of this study can be summarized as follows: (1) Implementing a technical GCN regression algorithm, the interdependency between each pair of points was captured by non-Euclidean correlation distance over a 4-year timeframe from 2012 to 2016. (2) A new hyperparameter was defined in the GCN approach mentioned above to create the best adjacency matrix by tuning through all hyperparameters simultaneously.
(3) The proposed hyperparameter, obtained from the GCN approach, was used to extract the interdependency between each pair of points for the two spatio-temporal models, GCN-LSTM and GCN-GRU. In this study, the GCN-LSTM model successfully captured both the spatial and temporal behavior of the landslide datasets, providing a promising result for spatio-temporal forecasting of cumulative landslide deformation. The GCN-LSTM model is more effective at predicting landslide deformation than RNN models, such as the simple RNN, the GRU, and the LSTM, and than another spatio-temporal algorithm, the GCN-GRU. In summary, our work provides a valuable contribution to the field of landslide prediction by demonstrating the effectiveness of GCN-LSTM on this type of problem. The results suggest that GCN-LSTM is a promising algorithm for predicting cumulative deformation caused by landslides and highlight the potential benefits of incorporating graph structure information into sequential data. Spatio-temporal models that incorporate various types of features have the potential to make more accurate predictions in a range of applications. Here are a few suggestions for future work with spatio-temporal models: • Incorporation of more diverse features: The performance of a spatio-temporal model can often be improved by incorporating more diverse features, such as time series of rainfall data. These features could provide additional context and help the model better capture the underlying patterns and relationships in the data. • Advanced deep learning architectures: Developing more advanced deep learning architectures, such as transformer or attention-based models, could help improve the performance of spatio-temporal models. These models could help capture more complex relationships between the features and the target variable. • Multi-modal integration: Another avenue for improvement could be to integrate data from multiple sources, such as satellite imagery, weather data, and ground-based sensors, to gain a complete understanding of the conditions leading to a particular event.
2023-08-07T15:07:45.513Z
2023-08-05T00:00:00.000
{ "year": 2023, "sha1": "b24a13d1eee0b62c01c82f4f5a7798fe911a022a", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s11069-023-06121-8.pdf", "oa_status": "HYBRID", "pdf_src": "Springer", "pdf_hash": "b574e66eb37e6c8aa085c4d26d7f5d8fe409af94", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [] }
7369407
pes2o/s2orc
v3-fos-license
Microenvironmental arginine depletion by macrophages in vivo. Since the tumour-selective cytotoxic activity of activated macrophages in vitro can be attributed to depletion of the culture medium of L-arginine by macrophage arginase, a series of experiments was designed to determine whether such a mechanism could operate in vivo. Extracellular fluid obtained from Gullino chambers within established tumours contained high levels of arginase, no detectable arginine and high levels of ornithine. When tumours were disaggregated into single-cell suspensions, arginase was readily detected within tumour macrophages but not within malignant cells. Inflammatory ascites induced in mice by Corynebacterium parvum was rich in arginase, depleted of L-arginine and cytotoxic in vitro to L5178Y and V79 cells. High levels of arginase in the ascites fluid were associated with resistance to challenge with syngeneic L5178Y cells. Lymph collected from the cisterna chyli in rats bearing a macrophage-rich sarcoma on the small bowel contained elevated levels of arginase, was depleted of arginine and contained increased concentrations of ornithine. We conclude that in sites of macrophage infiltration there is microenvironmental arginine depletion due to the action of arginase, and that arginase release could represent an important macrophage effector mechanism against a variety of targets, including malignant cells, virus-infected cells, fungi and parasites. WHEN "activated" by a variety of stimuli, macrophages synthesize and release a bewildering array of biologically active macromolecules. One such product, arginase, is of interest to us since, as Kung et al. (1977) have shown, induction of this enzyme in vitro may suppress T-cell function by depleting L-arginine from the culture medium. Furthermore, the selective in vitro cytotoxic activity of zymosan or lipopolysaccharide (LPS)-activated rodent macrophages for cultured malignant cells may be due to lethal deprivation of L-arginine mediated by arginase (Currie, 1978;Currie & Basham, 1978). Since L-arginine, not an essential amino acid for normal adult animals, is present in relatively high concentrations throughout the extracellular fluid, arginase-mediated arginine deprivation of target cells (malignant cells, micro-organisms, virus-infected cells or parasites) can only be envisaged, if at all, as a microenvironmental phenomenon occurring at or near the surface of the macrophage (by analogy, say, with neuromuscular transmission) or in the centre of chronic inflammatory lesions such as granulomas, macrophage-rich tumours or inflammatory exudates. This communication describes a series of experiments designed to determine whether microenvironmental arginine depletion occurs in tumours and inflammatory sites.
The results indicate that, in the sites of macrophage infiltration examined, there is a profound fall in the concentration of L-arginine in the extracellular fluid associated with high levels of arginase activity, and that such microenvironmental depletion could play an important role in vivo as an effector function of macrophages. MATERIALS AND METHODS Enzyme assays.-Arginase was estimated using the method of Schimke (1964) after Mn++ activation, and the results were expressed as μM urea/min/ml. Lysozyme was estimated by the lysoplate method employing appropriate species standards (Osserman & Lawlor, 1966) and the results expressed in μg/ml. Amino acid estimations were kindly performed by Dr John Walker using a J180 (Rank-Hilger) ion-exchange chromatograph. Tumours.-A variety of rat and mouse tumours were used. The following tumours are syngeneic in inbred Hooded rats: HSN-TC, a benzpyrene-induced well-differentiated fibrosarcoma, is highly immunogenic and slow-growing and it rarely metastasizes. It was derived as a tissue-culture subline of the HSN sarcoma. The parent HSN tumour is less immunogenic and grows more rapidly in vivo. The mouse tumours used were: L5178YE, an immunogenic methylcholanthrene-induced thymic lymphoma syngeneic in DBA2 mice, and FS6 and FS29, both of which are immunogenic methylcholanthrene-induced sarcomas syngeneic in C57BL/Cbi mice. Macrophage content.-As a rough guide to the macrophage content of the various tumours studied, the method of Evans (1972) was used. This method relies on adherence to glass of macrophages in the presence of trypsin. Both sides of each ring were covered with nylon-reinforced Millipore filters (WHP 02500) with a pore size of 0.45 μm. Tumour fragments were dissected under sterile conditions and mechanically chopped with scissors. The chambers were lightly coated on both sides with the resulting tumour "pulp" and inserted s.c. into ether-anaesthetized rats. The chambers were introduced via a transverse incision in the dorsal thoracic region and were gently eased to lie in the dorsal caudal region. The drainage tube was inserted into a subcutaneous channel (with the tip heat-sealed) and the wound closed over it. The animals were given antibiotic cover by daily i.p. injections of 20,000 u of benzyl penicillin and 20 mg of streptomycin suspended in 2 ml saline. Similar chambers were also inserted without tumour tissue into normal rats to provide a source of "normal" s.c. tissue fluid. Extracellular fluid samples were obtained when the tumours had reached a diameter of about 3-4 cm (15-20 days). Control normal fluids were obtained at the same time after chamber insertion. The drainage tube was exteriorized by reopening the original incision, the tip cut off and the fluid collected with the animals in Bollman restraining cages. Histopathological examination of the chambers within growing tumours or lying s.c. in normal animals revealed minimal host inflammatory reactions. Furthermore, the membrane surfaces of chambers within growing tumours were completely covered by, and in intimate contact with, living tumour tissue, without evidence of necrosis, fibrosis, fibrin deposition or cyst formation. Lysozyme content of the fluids obtained from s.c. chambers in normal rats was used as a check for local inflammation due to low-grade infection or reactions to the chamber materials.
Elevated lysozyme levels in fluids from control chambers were frequently detected in the early stages of this study, but were less frequent in animals receiving antibiotic cover. Inflammatory ascites.-Multiple i.p. injections of Corynebacterium parvum (Coparvax, Wellcome) into mice promote the development of a macrophage-rich inflammatory ascites. DBA2 female mice were injected i.p. with 350 μg C. parvum in saline. Seven days later they received an additional i.p. dose of 100 μg C. parvum. The resulting cellular ascites was collected on Days 1 to 8 after the last injection. The fluid was collected into ice-cold tubes, centrifuged in the cold and then ultra-filtered at 4°C using CT50 (Amicon) cones. The ultra-filtrate was examined for ornithine and arginine content and the fraction >50 kD was examined for arginase activity. Ascitic-fluid samples and the cells obtained from these were also examined for cytotoxic activity. The details of the assays are described below. Colony inhibition of V79 cells.-V79 Chinese hamster lung cells obtained from stock cultures by trypsinization were suspended in RPMI-1640 medium plus 10% heat-inactivated foetal bovine serum at 100 cells/ml. The cells were added in 1 ml volumes to the wells of Linbro 24-well disposable plastic culture plates and were allowed to adhere for 2 h at 37°C in 5% CO2 in humid air. After the addition of the test materials in fresh medium (and controls) the plates were incubated for a further 4 days. The plates were then rinsed in PBS, fixed in methanol and stained with crystal violet (1:2000). The number of discrete colonies in each well was counted under low-power microscopy (with a stage graticule). The results were expressed as % colony survival. Each observation was made in triplicate. Growth inhibition of L5178YE cells.-Cells of this DBA2 lymphoma were obtained from mice bearing ascitic tumour, washed, suspended in RPMI 1640 plus 10% heat-inactivated calf serum, and added at 8 x 10⁴/ml in 1 ml volumes into disposable Linbro culture trays. After the addition of test materials (and controls) the cultures were incubated for 24 h at 37°C in 5% CO2 in humid air. The cellular content of each well was then counted, after careful resuspension, in a haemocytometer. The results (obtained from triplicate wells) were expressed as per cent growth. Control cultures underwent about 2 doublings and represented the 100% growth. Cell counts less than the inoculum (i.e. cytolysis) were expressed as -ve growth. Tumour macrophages.-Tumours growing s.c. in syngeneic rats or mice were removed aseptically, disaggregated with 0.1% trypsin plus 0.1% collagenase, and the resulting cell suspensions plated into 25 cm² disposable plastic culture flasks. After incubation at 37°C for 1 h the flasks were well washed and then exposed to 0.1% trypsin for 5 min. The tumour cells removed were transferred to another flask in fresh medium and allowed to adhere. More than 95% of the trypsin-resistant cells were macrophages, as judged by morphological criteria, presence of Fc receptors, phagocytosis and the production of lysozyme. The malignant cell cultures obtained by trypsin subculture contained less than 1% macrophages as defined by these criteria. Duplicate flasks were rinsed and treated with 6% citric acid plus 1:2000 crystal violet for 30 min and the nuclei counted as a guide to the cell content of each flask. Flasks of tumour cells or tumour macrophages were washed with PBS and then exposed to 2 ml distilled water at 4°C for 15 min.
The flasks were freeze-thawed twice, the lysate centrifuged and passed through a 0.22 μm Millipore filter and then assayed for arginase content. Normal peritoneal-exudate macrophages from C57/BL/Cbi female mice were treated in the same way (i.e. trypsinized, plated, retrypsinized) and their arginase content examined. Arginase activity was expressed as μmol urea/min/10⁶ cells. Tumour lymph.-HSNTC tumour was inoculated via laparotomy into Peyer's patches in anaesthetized syngeneic hooded rats and was allowed to grow for 3-4 weeks. Under general anaesthesia a fine cannula was then inserted into the cisterna chyli and exteriorized. The wounds were resutured and the animals allowed to recover in Bollman restraining cages with free access to food and water. Samples of cisterna chyli lymph were obtained at various times after operation. Control lymph samples were obtained from similarly treated normal rats. Samples of HSNTC tumour growing on the bowel were examined by the method of Evans (1972) and were found to contain 32-42% macrophages. In some experiments the animals had previously had their mesenteric lymph nodes excised. Lymphatic drainage of the Peyer's patches was re-established within 3 weeks before inoculation of the tumour. RESULTS Gullino chambers The data are shown in Table I. Extracellular fluid obtained from chambers implanted s.c. in normal rats contained no detectable arginase, and levels of lysozyme similar to those of normal serum. Furthermore, the levels of arginine and ornithine also resembled those of normal serum. Fluids drained from chambers within the MC26, MC28 and HSN sarcomas, however, contained high concentrations of both lysozyme and arginase. Elevated levels of ornithine were found, and a total absence of detectable free L-arginine. The levels of arginine and ornithine in the chamber fluid from normal s.c. tissues were similar to serum levels in the same rats, indicating good equilibration. In the tumour-bearing rats the serum levels of these amino acids were similar to the normal levels. Histopathologically there was little evidence of polymorphonuclear leucocyte infiltration of the chamber-containing tumours, and it is concluded that the lysozyme and arginase are probably derived from the large number of macrophages resident within the tumour mass. We have been unable to detect arginase in lysates of polymorphonuclear leucocytes. Tumour macrophages Results are shown in Table II. Arginase was readily detectable in lysates of tumour-derived macrophages but was undetectable in lysates of macrophage-free tumour cells. Furthermore, supernatant media from 24 h cultures of such tumour macrophages also contained arginase activity, whereas supernatants from the appropriate malignant cell cultures did not. Normal peritoneal-exudate macrophages, however, when treated in a similar fashion, contained no detectable arginase activity, an observation which suggests that the tumour macrophages are in an activated state. Inflammatory ascites Ascites fluid induced by the i.p. injection of C. parvum contained very high levels of arginase activity (see Table III). Ultrafiltrates of these fluids contained barely detectable levels of free L-arginine. The arginase estimations were performed after the ultra-filtration step to avoid problems with background urea and other chromogenic materials.
Since the levels of free L-arginine in these ascites fluids were well below the levels necessary to maintain the survival of mammalian cells in vitro (Eagle, 1959) and certainly below the levels necessary for the survival of malignant cells, the fluids were tested for cytotoxic activity on L5178Y lymphoma cells and on V79 Chinese hamster lung cells. The fluids were highly cytotoxic to both cell types and the cytotoxicity could be abrogated by the addition of excess free L-arginine (Table V). Furthermore, peritoneal-exudate cells (85% macrophages) obtained from these ascites fluids by centrifugation contained high levels of arginase activity, released abundant arginase when maintained in serum-free medium for 24 h, and were cytotoxic to both V79 and L5178YE cells in vitro. Groups of similar mice which had been given 2 i.p. injections of C. parvum were challenged by an i.p. injection of 10⁵ L5178Y lymphoma cells on Days 0, 2, 6, 8, 10 and 12 after the second dose. Groups of control untreated mice were given a similar dose of lymphoma cells. The survival of these mice in days is shown in Table IV, which demonstrates prolonged survival in the groups challenged on Days 2, 6 and 8 after the second dose of C. parvum; the ascitic fluids collected over this period contained abundant arginase activity and undetectable levels of L-arginine. Tumour lymph The draining lymph from control rats and from rats bearing HSNTC tumour on the small intestine after 28 days of growth was collected for a period of approximately 1 h (24 h after insertion of the drainage cannula). Representative results obtained on such lymph samples are shown in Table VI. Lymph draining the tumour contained elevated levels of lysozyme and arginase. The arginine content was considerably lower than the control level and the concentration of ornithine increased over 3-fold. These results indicate that the presence of the tumour (containing large numbers of macrophages) had depleted the downstream lymph of L-arginine. The elevation of ornithine levels indicates that this depletion was due to the action of arginase. The results were identical in animals whose mesenteric lymph nodes had previously been excised, indicating that these nodes play no role in generating arginase activity. DISCUSSION The release of arginase by activated macrophages in vitro with the consequent lethal effect on target cells could clearly be a tissue-culture artefact. Since the cytotoxic activity of supernatants from activated macrophages can be abrogated by the addition of excess L-arginine, a possible role for such a mechanism operating in vivo is at first sight improbable. L-arginine, the final amino acid in the urea-cycle degradation pathway, is present in the extracellular fluid in relatively high concentrations. An in-vivo role for arginine deprivation as an effector mechanism of macrophages can only be visualized as a microenvironmental phenomenon, occurring at the macrophage surface or within sites of intense macrophage infiltration. However, it has been reported (Senft, 1967) that in mice bearing extensive schistosome-egg granulomas, sites of intense macrophage infiltration, extracellular concentrations of arginine are severely subnormal throughout the body, and it has also been noted that hepatic schistosomiasis in man is associated with high levels of circulating arginase in the plasma (Khalifa et al., 1976).
It is therefore conceivable that extensive granuloma formation might lead to systemic arginine depletion under conditions where the liver is unable to respond by producing more (e.g., in schistosomiasis the egg granulomas cause substantial liver damage). However, it is possible that in both mouse and man damaged hepatic cells may be responsible for the release of high levels of arginase. Arginase is present in high concentrations (i.e., higher than the surrounding normal tissues) in animal tumours and in granulomas (Edlbacher & Merz, 1927). In studies of skin carcinogenesis, Roberts & Frankel (1949) showed that tumours contain more arginase than normal skin. Bach & Lasnitzki (1949) examined the arginase content of mouse tumours and showed that high enzyme activity was found in slow-growing tumours, whereas rapidly growing tumours contained low activity. They went so far as to conclude that arginase "may be part of a defence mechanism". Since we found that arginase was readily detectable in macrophages isolated from growing tumours but not in the malignant cells, it seems likely that these earlier workers were examining host-macrophage infiltration. The observations of Evans (1973) that tumour-derived macrophages are cytostatic in vitro could clearly be attributed to arginase production. The use of Gullino chambers to examine tissue fluid from within a growing tumour revealed that there was abundant arginase activity which was associated with high levels of lysozyme. Lysozyme, in the absence of overt granulocyte infiltration (noted histologically), is a constitutive marker for cells of the monocyte-macrophage series, and we interpret the high levels of this enzyme in tumour chamber fluids as reflecting macrophage infiltration. Serum levels of lysozyme in the tumour-bearing animals were modestly elevated, as previously described (Currie & Eccles, 1976). In-vitro examination of tumour cells and macrophages indicates that the high levels of arginase in the tumour extracellular fluids are derived from macrophages resident within the tumour. Since normal rat or mouse peritoneal-exudate macrophages contained no detectable arginase activity, we conclude that the tumour macrophages are metabolically "activated" (i.e. resemble LPS or zymosan-treated cells). Not surprisingly, there were elevated levels of ornithine and no detectable free arginine in the tumour fluids. This complete absence of arginine from the fluids may not reflect the level of arginine within the tumour tissue fluid since enzymic degradation may have occurred within the chamber during the collection period. Since mammalian arginases have a high dissociation constant, the presence of arginase within tumour extracellular fluid would not rule out the concomitant presence of low levels of free L-arginine. Control chambers inserted s.c. in normal rats provided fluids which contained no detectable arginase activity, normal levels of lysozyme and normal levels of L-arginine (similar to serum levels). Therefore a possible role for the chambers themselves in inducing host cellular infiltration in these experiments can be excluded. However, we had to discard several previous experiments because of high levels of lysozyme and arginase in the fluids from control chambers. These abortive experiments indicate that in an inflammatory site (induced by the chamber or by infection) there is arginase-mediated arginine depletion. Multiple i.p. injections of C. parvum in mice induced a macrophage-rich inflammatory ascites.
The macrophages contained abundant arginase and when cultured in vitro continued to produce and release the enzyme. The ascitic fluids also contained high levels of arginase. When tested against various target cells this ascitic fluid was highly cytotoxic and its cytotoxicity could be abrogated by the addition of high concentrations of arginine. After the 2nd i.p. injection of C. parvum the ascitic fluid arginase levels reached a peak after 2-6 days, as did the total numbers of macrophages in the exudate. Furthermore, when such animals were challenged with 5 x 10⁵ syngeneic lymphoma cells the peak resistance to challenge coincided with the peak arginase levels. This ascites model indicates that in sites of infiltration with activated macrophages there is likely to be microenvironmental arginine depletion of the extracellular fluid, conditions inimical to the successful implantation and proliferation of a tumour. Significant levels of arginase activity with appropriate changes in arginine and ornithine levels were also found in the lymph draining a macrophage-rich tumour, another observation suggesting microenvironmental arginine depletion at a site of macrophage infiltration. Although macrophages resident within a growing tumour deplete the extracellular fluid of arginine, the tumours continue to grow. Furthermore, in inflammatory ascites fluid containing no detectable free L-arginine, resistance to tumour growth is only partial. The tumours all grew eventually. To proliferate in vitro the tumour cells studied all require levels of L-arginine well above those detected in vivo. There are several possible explanations for this paradox. Firstly, it is possible that the methods used permitted arginine breakdown during the collection. This is no doubt true for the Gullino chamber experiments. Examining the ascites fluids from C. parvum-treated mice we attempted to minimize this problem by ultrafiltration in the cold. However, arginine breakdown may well have taken place during the collection and centrifugation. A second possibility is the rapid in-vivo reutilization of arginine from dying cells. Although the tissue fluids may have low arginine concentrations, direct cell-to-cell transport may provide sufficient arginine for survival of some tumour cells. A possible role for local arginine deprivation as an effector function of activated macrophages may not be restricted to responses to malignant cells. Macrophage-mediated suppression of lymphocyte proliferation in vitro (Kung et al., 1977) may be due to arginine breakdown by macrophage arginase. Its possible role in vivo is unclear. T cells, for example, on entry to an inflammatory site may not need to proliferate. However, animals whose macrophages are activated by, say, C. parvum may show depressed T-cell reactivity due presumably to inhibition of clonal expansion (Currie, 1976). Arginine deprivation is known to inhibit the proliferation of polyoma virus in mouse cells (Winters & Consigli, 1971) and of vaccinia virus in HeLa cells (Archard & Williamson, 1971). Furthermore, some parasites such as schistosomes may rely on the host as a source of arginine (Senft, 1967). Local macrophage infiltration could conceivably deprive parasites of such arginine supplies. The administration of arginase for the treatment of tumours has been reported to be successful by several authors (Wiswell, 1951; Bach & Simon-Reuss, 1953).
Bach & Swaine (1965) have reported substantial growth retardation of the Walker tumour in rats using a highly purified arginase obtained from horse liver. The major obstacles to further exploration of this approach are the very high Km of mammalian arginases and their very short half-life when injected. However, the selective cytotoxic effects of arginine deprivation previously reported (Currie & Basham, 1978) suggest that this or similar approaches to therapy may be worth further exploration. Dr Cifuentes was supported by the Fundación Científica de la Asociación Española contra el Cáncer.
2014-10-01T00:00:00.000Z
1979-06-01T00:00:00.000
{ "year": 1979, "sha1": "4749b02ac01bc5e78a6587bc780d2c0bfde0fa98", "oa_license": "implied-oa", "oa_url": "https://europepmc.org/articles/pmc2009995?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "4749b02ac01bc5e78a6587bc780d2c0bfde0fa98", "s2fieldsofstudy": [ "Biology", "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
235596784
pes2o/s2orc
v3-fos-license
Acceptability and Effectiveness of Artificial Intelligence Therapy for Anxiety and Depression (Youper): Longitudinal Observational Study Background: Youper is a widely used, commercially available mobile app that uses artificial intelligence therapy for the treatment of anxiety and depression. Objective: Our study examined the acceptability and effectiveness of Youper. Further, we tested the cumulative regulation hypothesis, which posits that cumulative emotion regulation successes with repeated intervention engagement will predict longer-term anxiety and depression symptom reduction. Methods: We examined data from paying Youper users (N=4517) who allowed their data to be used for research. To characterize the acceptability of Youper, we asked users to rate the app on a 5-star scale and measured retention statistics for users’ first 4 weeks of subscription. To examine effectiveness, we examined longitudinal measures of anxiety and depression symptoms. To test the cumulative regulation hypothesis, we used the proportion of successful emotion regulation attempts to predict symptom reduction. Results: Youper users rated the app highly (mean 4.36 stars, SD 0.84), and 42.66% (1927/4517) of users were retained by week 4. Symptoms decreased in the first 2 weeks of app use (anxiety: d =0.57; depression: d =0.46). Anxiety improvements were maintained in the subsequent 2 weeks, but depression symptoms increased slightly with a very small effect size ( d =0.05). A higher proportion of successful emotion regulation attempts significantly predicted greater anxiety and depression symptom reduction. Conclusions: Youper is a low-cost, completely self-guided treatment that is accessible to users who may not otherwise access mental health care. Our findings demonstrate the acceptability and effectiveness of Youper as a treatment for anxiety and depression symptoms and support continued study of Youper in a randomized clinical trial. Introduction Nearly half the people in the United States will have a mental disorder at some point during their life span [1,2], and many more will have subthreshold symptoms. The most frequent mental health conditions are anxiety and depression, jointly termed "emotional disorders," and these impact 34% and 21% of people in the United States, respectively [3]. Despite the availability of effective treatments for emotional disorders, most people in need of treatment will not receive it [4]. Researchers have found that both structural (eg, financial, availability) and attitudinal barriers (eg, desire to handle problems independently) prevent patients from seeking mental health treatment [5]. Fully automated mental health intervention apps offer the promise of overcoming these barriers. By obviating the need for a trained clinician, the cost of treatment can be reduced by orders of magnitude and can be delivered to anyone with access to the internet. Moreover, fully automated treatments provide a means to treat patients who are uncomfortable seeking help from another person. One particularly promising type of digital mental health intervention is a mobile app that can be installed on a person's mobile device. The Apple and Google Play stores organize these apps, centralizing the location where interventions can be accessed and allowing users to vet apps by reviewing their descriptions in the store and reading user reviews. Once installed on a person's phone, the mobile app medium makes it possible for people to access interventions anytime and anywhere. 
This opportunity has not been overlooked. Recent figures tally over 10,000 mental health apps available to consumers [6]. However, while the options for mental health treatment apps are at an all-time high, there is little research on the acceptability of available app-based treatments and whether they can actually reduce symptoms of psychopathology. In particular, mobile apps that are completely self-guided, and hence maximally accessible and scalable, are especially understudied. That said, a significant portion of the literature supports the efficacy of self-guided cognitive behavioral therapy administered via the internet, which patients may or may not be able to access on mobile phones depending on the intervention [7,8]. Thus, the potential for mobile apps to demonstrate similar efficacy is promising. The present study aimed to assess the acceptability and effectiveness of a self-guided intervention app called Youper that targets emotional disorders. Although few in number, a handful of randomized controlled trials (RCTs) examining fully self-guided mental health intervention apps have shown promise. One mobile cognitive behavioral therapy intervention similar to Youper that employs a humanlike, chatbot interface found that college students experienced significantly reduced depression symptoms over the course of 2 weeks [9]. A number of RCTs have demonstrated the efficacy of self-guided mobile treatment programs for depression, including 1 testing problem-solving therapy and cognitive training [10], 3 testing cognitive behavioral therapy interventions [11][12][13], and 1 testing acceptance-based therapy [14]. Small-scale observational studies have also shown positive results for self-guided app-based treatments for symptoms of depression and anxiety [15,16]. Although these studies are promising, the evidence on self-guided digital mental health treatment is limited by small sample sizes obtained almost exclusively via RCTs. RCTs are considered the gold standard of evidence and play an indispensable role in assessing the efficacy of medical interventions. However, real-world evidence provides a necessary complement to understanding the impact of an intervention in the context that it will be received [17]. This fact has been acknowledged by numerous government bodies including the National Institutes of Health, the Food and Drug Administration, and the European Medicines Agency, who have called for real-world evidence on interventions [18][19][20]. This call arises from the recognition of external validity shortcomings in RCTs due to factors such as inclusion criteria and differences in treatment adherence [21][22][23]. First, RCTs may be composed of different populations than those that naturally seek digital mental health treatment. Participants in RCTs are acquired through recruiting efforts and are then selected based on specific inclusion criteria. For example, participants in RCTs may be required to meet a minimum threshold of depression symptoms [10]. In the real world, digital mental health treatment (often distributed via smartphone app stores) is available to anyone with a smartphone and users span the range of depression symptomatology levels. Thus, the distribution of potential app users may or may not match the distribution of participants seen in the small body of existing literature on self-guided digital mental health.
Further, it is plausible that participants who enroll in clinical mental health trials are more comfortable with seeking external mental health care than are users that discreetly download an app on their phone. Since targeting populations who carry stigma against seeking mental health care is an important goal and potential advantage of this technology, it is important that the populations being studied have equivalent attitudes to the population of potential users. Second, treatment adherence in RCTs may systematically differ from real-world app usage because of the different experience that a participant in a clinical trial has compared to a user who downloads an app. In clinical trials, participants are often paid money to participate. Paid participants may feel a social obligation to adhere to the treatment plan. However, in real life, where app-based treatment is a completely individual experience, it is unclear whether adherence will be equivalent. Moreover, RCTs require, at minimum, an initial contact with study coordinators and sometimes additional contacts throughout the study. As contact with a treatment provider is known to increase adherence, this initial contact could boost levels of engagement [24]. Because the degree to which one engages and adheres to a treatment is related to treatment success [7], it is critical that we supplement evidence gained from RCTs with an understanding of how treatment recipients organically experience the app in the real world. We define artificial intelligence (AI) therapy as a digital and fully automated, mobile, psychological treatment program that uses a conversational interface to deliver just-in-time adaptive interventions. The 3 key features that set AI therapy apart from traditional digital intervention approaches are (1) the use of a conversational (chatbot) interface, (2) inclusion of just-in-time interventions, and (3) adaptation and personalization. A primary goal of this study is to test whether AI therapy has potential as a viable treatment approach. An additional goal of this study is to test the theoretical model underlying the just-in-time approach, which is a critical feature of AI therapy. Although just-in-time approaches have been used to target health behaviors, such as alcohol use, smoking, and obesity, only a small number of studies have described this approach in relation to emotional disorders [25,26]. Just-in-time interventions are designed to help the user manage moment-to-moment challenges that accumulate to negatively impact broader mental health functioning and produce symptoms of psychiatric disorders. Thus, just-in-time interventions target a proximal outcome that is theorized to accumulate over time to impact a longer-term outcome. In the case of anxiety and depression symptoms, emotion dysregulation is theorized to be a proximal cause for the manifestation of symptoms as well as a target for intervention [27,28]. Consistent with this hypothesis, a prior study of a just-in-time intervention for depression showed that it had promising impacts on depressive symptoms, albeit in a sample with just 10 people [25]. Following this theoretical work, we hypothesized that users who repeatedly succeed in regulating negative emotions by engaging just-in-time digital mental health interventions will experience long-term symptom reduction through accumulation of these regulation successes. We call this the cumulative regulation hypothesis. 
In addition to assessing AI therapy's effectiveness for symptom reduction, the current study will test the cumulative regulation hypothesis by testing the association between the accumulation of successful emotion regulation efforts with symptom reduction over time. In this paper, we examine the acceptability and effectiveness of AI therapy as it is implemented in a smartphone app called Youper. Our study had 3 aims. First, we explored the acceptability of Youper by analyzing user ratings and retention metrics. Second, we examined effectiveness by measuring the reduction in anxiety and depression symptoms in the first month of app use. We hypothesized that users of Youper would experience a reduction in anxiety and depression symptoms during this time period. Third, we tested the cumulative regulation hypothesis by examining the longitudinal relationship between success in downregulating acute negative emotion in Youper conversations and clinical symptoms. We hypothesized that within-session emotion regulation success during Youper engagements would predict greater reductions in anxiety and depression symptoms. Finally, in exploratory analyses, we examined whether demographics, including gender and age; and clinical characteristics, including the number of self-reported diagnoses, current psychotropic medications, and concurrent therapy, could predict symptom reduction over time. Participants Participants were Youper subscribers (ie, users who paid for full access to Youper) who downloaded the app between March 4, 2020, and July 10, 2020. This time frame was selected because Youper was relatively stable during this period (ie, no significant updates or changes to the intervention were deployed during this time). Subscribers paid US $44.99 to have unlimited access to Youper's interventions for 1 year. Users who did not subscribe were only able to access the emotion regulation interventions once as part of a free sample, and therefore, were not included in this analysis. Of 5943 users who completed at least one symptom measure in the study timeframe, 76.01% (n=4517) agreed for their data to be used for research, leaving a useable sample of 4517 participants. The sample was composed of 81.62% women (n=3687), 14.15% men (n=639), and 3.43% nonbinary individuals (n=155), and the average age of participants was 28.73 years (SD 9.63). Additional participant demographics and clinical characteristics are presented in Table 1. Participants completed symptom assessments at baseline (T0; within 3 days of subscribing to Youper), 2 weeks after baseline (T1), and 4 weeks after baseline (T2). Assessments were available to users every 14 days, and the majority of users completed their assessments within 3 days of them becoming available. Participants received access to anxiety or depression symptom measures based on their responses to screening questions. Symptom measures were administered if they endorsed a history of being diagnosed with clinical anxiety or depression, or if they reported elevated anxiety or depression symptoms on 2-item screening measures. Participants could receive only an anxiety measure, only a depression measure, or both depending on their responses to the screening items. Throughout the course of the measurement period, participants engaged in emotion regulation interventions at their discretion when emotional episodes arose. 
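The measure-routing rule just described can be summarized in a short sketch. The following R snippet is purely illustrative: the function name, argument names, and the 2-item screener cutoff (a score of 3 or more, a common convention for 2-item screeners) are assumptions, since the paper does not report the exact threshold used.

```r
# Minimal sketch of the screening-based measure assignment described above.
# Field names and the screener cutoff (>= 3) are illustrative assumptions.
assign_measures <- function(dx_anxiety, dx_depression,
                            gad2_score, phq2_score, cutoff = 3) {
  measures <- character(0)
  if (isTRUE(dx_anxiety) || gad2_score >= cutoff) {
    measures <- c(measures, "GAD-7")          # anxiety symptom measure
  }
  if (isTRUE(dx_depression) || phq2_score >= cutoff) {
    measures <- c(measures, "PHQ (modified)") # depression symptom measure
  }
  measures
}

# Example: a user with no prior diagnosis but an elevated depression screener
assign_measures(dx_anxiety = FALSE, dx_depression = FALSE,
                gad2_score = 1, phq2_score = 4)
#> [1] "PHQ (modified)"
```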
Youper Intervention Youper is a novel intervention approach that aims to enhance the user's emotion regulation skills using empirically supported treatments for anxiety and depression. Although the emotion regulation strategies employed in Youper have precedent in existing treatment protocols for anxiety and depression, the adaptation of these interventions to help a user manage emotional distress at the present moment is novel. Youper's intervention is delivered via a conversational (ie, chat) interface and is entirely automated. Youper primarily uses a decision tree to select its responses to the user input. Each interaction with Youper is called a "conversation." Conversations follow a prespecified sequence (see Figure 1 for examples): identify current emotion and intensity (0%-100%), select contributing factors from a prespecified list, complete an open text entry about what is causing the current mood, complete emotion regulation skill practice for a negative mood or wellness practice for a positive mood (see Table 2), and identify current emotion and intensity (0%-100%). The goal of each conversation is to help the user learn adaptive emotion regulation skills. If the user is experiencing a negative emotion, the skill targets the current emotion. If the user is experiencing a positive emotion, the skill encourages upregulation of that emotional state. If the user is in a neutral state, the skills encourage practice of activities that promote emotionally adaptive behaviors and attentional and cognitive control. Youper primarily uses just-in-time interventions (delivered at the moment of need) to help users practice and learn skills for emotion regulation. Youper's interventions target the 3 categories of treatment mechanisms defined by the common elements framework [29]. The common elements framework provides a review of common elements across cognitive and behavioral therapies inclusive of both traditional (cognitive therapy, behavioral activation) and third-wave (acceptance and commitment therapy, dialectical behavior therapy) approaches. They identify 3 mechanistic targets common to multiple effective therapies, including attention change (improving attentional focus and flexibility), cognitive change (improving ability to change perspective on an event), and context engagement (engaging new internal and external contexts to counteract maladaptive patterns). Youper's interventions aim to increase emotion regulation skills by targeting these common elements. For example, Youper includes interventions to increase attentional control such as mindfulness, cognitive change such as cognitive restructuring or gratitude journaling, and context engagement via behavioral activation exercises. The common elements framework was used to guide the development of Youper's interventions due to the extensive empirical support for the efficacy of each of these targets in enhancing emotion regulation and reducing symptoms of emotional disorders [30][31][32][33][34][35][36]. Each intervention is described in Table 2. Each skill follows a series of steps modeled after existing treatment manuals or research protocols. Skills practice includes a variety of formats including open-text entry following a prompt, graphical user interfaces, written content delivered via the chat, and audio. Example interaction with Youper. Users start by reporting a discrete emotion (A) and the intensity (B) which they feel the emotion. 
They then report which factors contributed to the emotion (C) and describe the precipitating event (D). Next, they proceed through a randomly selected intervention (eg, E or F) from the list (see Table 2). Finally, they report their discrete emotional state again and the intensity which they feel that emotion (A and B). User Ratings To assess acceptability of the Youper intervention, we asked users to provide a rating of the app using a 5-star scale. Users were given the following prompt: "I'd love to know how our journey together is going so far." Users then provided their rating of Youper by selecting a number of stars ranging from 1 to 5. Users then were asked to provide feedback using an open text box. Retention Retention was measured as the proportion of Youper subscribers who engaged with the app during week 1, 2, 3, and 4 after subscription, as well as the average number of conversations that users had during each of these weeks. Anxiety and Depression Symptoms Anxiety symptoms were measured using the 7-item generalized anxiety disorder measure (GAD-7) [37]. The GAD-7 is a widely used measure of generalized anxiety disorder symptom severity and is frequently used as a general measure of overall anxiety symptoms. It has demonstrated excellent psychometrics with a Cronbach α of .92 and a sensitivity and specificity of 89% and 82%, respectively, for classifying generalized anxiety [37]. Depression symptoms were measured using a modified version of the Patient Health Questionnaire-9 (PHQ-9) with the suicide-related item removed and 1 of the items divided into 2 separate items [38]. Specifically, the item that asks if the respondent "has been moving slowly or has been fidgety and restless" was divided into an item about "moving slowly" and another item about "being fidgety and restless." The PHQ-9 is a widely used measure of depression symptom severity with excellent psychometrics. The PHQ-9 has a Cronbach α of .89 and has both a sensitivity and a specificity of 88% for classifying major depression. In our slightly modified PHQ, we observed a comparable Cronbach α of .84 (95% CI 0.83-0.85) indicating good reliability. Within-Session Emotion Regulation To test the cumulative regulation hypothesis, we derived a measure of cumulative emotion regulation success. At the beginning of each Youper conversation, users selected their current emotion from a list of possible emotions as well as the intensity of that emotion (see Figure 1). Users who selected a negative emotion were also asked to report their emotion at the end of the conversation with Youper. We classified cases where users started with a negative emotion and ended with either a positive emotion or with a less intense negative emotion as a within-session emotion regulation "success." We classified cases where users reported a worsening or unchanging negative emotion as a "failure to regulate." To calculate a measure of cumulative within-session regulation success, we computed the proportion of cases classified as a success out of all conversations that started with a negative emotion. As discrete negative emotion words encode different emotional intensities, we scaled the numeric self-reported emotional intensity according to an intensity scale factor corresponding to the discrete emotion the user selected. To derive the intensity scale factor for each discrete emotion, we first obtained normative valence and arousal ratings from a database of words that have been rated on a scale of 1 to 9 by a large sample of participants [39]. 
Next, we subtracted a constant (C=6) from the normative valence ratings, chosen so that all negative valence words would have negative-valued ratings and positive valence words would have positive ratings. To compute the intensity scale factor for each emotion word, we took the square root of the sum of the squared valence and arousal ratings (ie, the L2 norm). This decision was premised on the assumption that emotion intensity is a composite of valence and arousal [40]. Finally, we multiplied the self-reported numeric intensity by the intensity scale factor for the given emotion to obtain a scaled emotion intensity rating that could be compared across discrete emotion categories. The scaling procedure had the effect of incorporating both the intensity of the emotion word and the self-reported numeric intensity into a single value which could be used to assess emotion regulation success pre-to postintervention. For example, without scaling, a participant that went from a rating of "75 annoyed" to "70 angry" would be erroneously classified as an instance of successful downregulation of negative emotion, despite the higher intensity imbued in the word "angry." With scaling, "75 annoyed" would translate to "-383" and "70 angry" would translate to "-482," and the increase in magnitude of negative emotion would result in a classification of failure to regulate. However, if the participant went from "75 annoyed" to "30 angry," the "30 angry" rating would be scaled to "-207," and the instance would be classified as a regulatory success. This procedure allowed us to use both the text information and numeric information in our assessment of success or failure to regulate emotions. As a check of robustness, we ran all analyses without the scaling procedure, and the results were substantively similar. To be conservative, we ultimately dichotomized these scaled scores into regulation successes and failures because, despite appearing to have a continuous measure of emotion regulation success, we were not confident that these scores truly represented precise gradations along a continuum. Demographic and Clinical Characteristics We examined both demographic and clinical characteristics as predictors of symptom reduction. Demographic characteristics included age (continuous) and gender (multinomial; man, woman, and nonbinary). Clinical characteristics included number of self-reported diagnoses (continuous), whether the user was currently taking psychotropic medication (binary), and whether the user was currently receiving psychotherapy (binary). Aim 1: Acceptability We report descriptive statistics for user retention and app ratings. Aim 2: Effectiveness To estimate symptom reduction as a function of time, we fit piecewise multilevel models in R (version 4.0.2; The R Foundation for Statistical Computing) using the package "lmerTest" (version 3.1-2) [41]. Consistent with prior work, we selected a piecewise approach to capture a typical pattern of symptom reduction observed in treatment studies where symptoms initially decrease sharply and then level out as time progresses [42][43][44][45][46][47]. In these models, we regressed the symptom outcome measure (GAD-7 score or PHQ score) onto the number of days since subscribing to the app. We selected multilevel models because our outcome measures were nested within individuals as a result of repeated measurement at multiple timepoints. Multilevel models allow for the estimation of within-subject effects. 
Further, when fit with maximum likelihood, multilevel models allow for the inclusion of participants with incomplete data without deletion or imputation and produces unbiased estimates for model parameters [48,49]. As per guidelines for randomized clinical trials, we conducted an intent-to-treat analysis, including all participants who had at least one assessment [50][51][52][53]. As discussed by Gupta [51], "intent-to-treat analysis avoids overoptimistic estimates of the efficacy of an intervention resulting from the removal of non-compliers by accepting that noncompliance and protocol deviations are likely to occur in actual clinical practice." We estimated the reduction of symptoms from T0 to T1 and from T1 to T2. We used a breakpoint at 14 days, as participants' second of 3 symptom measurements was available to be completed 14 days after the first measurement. Because not all participants completed assessments immediately when they were available, we chose to treat time as a continuous predictor in our analysis rather than simply grouping observations into time points at T0, T1, and T2. This approach allowed us to keep all information that we had about the time that had elapsed from baseline and was more conservative because it did not assume that a change occurring more than 14 days after baseline was occurring exactly at 14 days. The models included 2 fixed effect parameters: one which estimated the slope of symptom reduction from the start of using Youper to 14 days later, and another which estimated the slope of symptom reduction from the 14-day mark onward. Additionally, we included a random intercept term for each participant. We calculated Cohen d effect sizes by dividing the mean difference in symptom levels by the square root of the sum of the participant-level intercept variance and the residual variance [54]. Aim 3: Cumulative Regulation Hypothesis To test the cumulative regulation hypothesis (ie, whether cumulative emotion regulation success within conversations predicted subsequent psychopathology symptoms), we fit longitudinal path analysis models for each of the 2 symptom measures (GAD-7 and PHQ) in the R package, "lavaan" (version 0.6-6) [55]. We fit these models using full information maximum likelihood and allowed the covariances between exogenous variables to be freely estimated [56]. This method enabled us to conduct an intent-to-treat analysis, including all participants that had a measurement for at least one variable included in the model. In these models, we estimated all autoregressive paths and lagged paths from emotion regulation success to subsequent clinical symptoms. Specifically, each path analysis model consisted of 3 regression equations. In the first equation, we regressed the T1 symptom outcome onto the T0 symptom outcome and the proportion of within-session regulation successes between T0 and T1 (ie, the proportion of negative emotions that were successfully regulated of the total number of negative emotion regulation attempts). In the second equation, we regressed the proportion of within-session regulation successes between T1 and T2 onto the proportion of within-session regulation successes between T0 and T1. Finally, we regressed the T2 symptom outcome onto the T1 symptom outcome and the proportion of within-session regulation successes between T1 and T2. (See Figure 3 for an illustration of paths with standardized coefficients.) 
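To make the three computational steps above concrete, the following R sketch shows one way they could be implemented: the scaled-intensity classification of within-session regulation success, the piecewise multilevel model with a 14-day breakpoint, and the longitudinal path model. This is not the authors' code; the column names, the placeholder valence and arousal values, and the helper-function names are assumptions, though the packages named (lmerTest, lavaan) are the ones reported in the text.

```r
# Minimal sketch, under assumed variable names, of the analyses described above.
library(lmerTest)  # lmer() with Satterthwaite p-values
library(lavaan)

## 1. Scaled emotion intensity and within-session regulation success.
## Valence and arousal are on a 1-9 scale; the values fed in here would come
## from published norms, which are not reproduced in this sketch.
scaled_intensity <- function(intensity, valence, arousal, C = 6) {
  signed_valence <- valence - C                         # negative words get a negative sign
  scale_factor   <- sqrt(signed_valence^2 + arousal^2)  # L2 norm of (valence, arousal)
  sign(signed_valence) * intensity * scale_factor
}
# A conversation counts as a "success" if the scaled intensity moves upward
# (a less intense negative emotion, or any positive emotion) by the end.
regulation_success <- function(pre, post) post > pre

## 2. Piecewise multilevel model with a breakpoint at 14 days.
## 'd' is assumed to have one row per assessment: user_id, days, gad7.
fit_piecewise <- function(d) {
  d$days_early <- pmin(d$days, 14)      # slope from day 0 to day 14
  d$days_late  <- pmax(d$days - 14, 0)  # slope from day 14 onward
  lmer(gad7 ~ days_early + days_late + (1 | user_id), data = d)
}

## 3. Longitudinal path model: autoregressive symptom paths plus lagged paths
## from the proportion of regulation successes to later symptoms.
path_model <- '
  gad7_t1 ~ gad7_t0 + reg_success_01
  reg_success_12 ~ reg_success_01
  gad7_t2 ~ gad7_t1 + reg_success_12
'
# fit <- sem(path_model, data = wide_data, missing = "fiml", fixed.x = FALSE)
```

With this scaling, the worked example from the text behaves as intended: a post-conversation rating that is more negative on the scaled scale than the pre-conversation rating is classified as a failure to regulate, and one that is less negative (or positive) as a success.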
Exploratory Analyses: Clinical and Demographic Predictors To test predictors of treatment response, we fit piecewise mixed effects models like those used in Aim 2, with the addition of interaction terms for the specified predictor. Specifically, we regressed the symptom outcome onto the interaction of the specified predictor and the number of days since the participant subscribed to the app. We examined age (continuous), gender (dummy coded with female as the reference group), number of self-reported diagnoses (continuous), whether the user was taking psychotropic medication (binary), and whether the user was in therapy (binary) as individual difference predictors of symptom reduction. Anxiety Results are displayed in Figure 2. Participants (N participants =4144; N observations =7093) experienced a significant reduction in anxiety symptoms from T0 to T1 (b=-0.21; bootstrapped 95% CI -0.22 to -0.19; P<.001). From T1 to T2, there was no significant change in anxiety symptoms (P=.35). The conditional means (and bootstrapped SEs) at day 0, day 14, and day 28 were 12.36 (SE 0.08), 9.45 (SE 0.11), and 9.33 (SE 0.11), respectively. These differences equate to Cohen ds of 0.57 between day 0 and day 14, 0.60 between day 0 and day 28, and 0.02 between day 14 and day 28. When analyses were conducted only on participants who had completed at least two assessments (N participants =2117; N observations =5066) or all 3 assessments (N participants =827; N observations =2481), results were unchanged. Preliminary Analyses We first examined the probability that users would successfully regulate their emotion within a conversation with Youper. As described in the methods, we defined successful regulation as a conversation that started with a negative emotion and ended with either a negative emotion at a lower intensity or a positive emotion. Using a generalized linear model with logit link function and random intercepts for each participant and each preintervention discrete emotion (N participants =4120; N observations =32,885), we found that overall, participants were more likely to succeed in regulating their negative emotion than to fail (OR 4.82, bootstrapped 95% CI 3.89-5.99; P<.001). Anxiety To examine the effect of regulatory success within Youper sessions on anxiety symptoms, we fit a longitudinal path analysis model (N participants =4284; see Figure 3 for ns for each variable). The model had good fit characteristics as indicated by a significant chi-square value and standard fit statistics. Depression In order to examine the effect of emotion regulatory success on depression symptoms, we fit a similar longitudinal path analysis model (N participants =4228; see Figure 3 for ns for each variable). This model also had good fit characteristics as indicated by a significant chi-square value and standard fit statistics (χ²(4)=50.93; P<.001; RMSEA=0.053; TLI=0.94; CFI=0.97; SRMR=0.041). For each 0.10 increase in the proportion of negative emotions that users successfully regulated between T0 and T1, they reported a 0.20 point reduction on the subsequent PHQ depression measure at T1 (P<.001). For every 0.10 increase in the proportion of negative emotions that users successfully regulated between T1 and T2, they reported a 0.13 point reduction in depression symptoms at T2 (P=.02). See Figure 3b for standardized coefficients for all paths. Secondary Analyses In addition to our primary hypotheses, we also conducted exploratory analyses of potential individual difference predictors of symptom reduction.
In these analyses, we fit piecewise mixed effects models with a breakpoint at 14 days (time of T1 symptom assessment). We regressed the specified symptom assessment onto the interaction of the specified predictor and the number of days since the user subscribed to the app. We examined age, gender, whether the user was taking psychotropic medication, and whether the user was in therapy as individual difference predictors of symptom reduction. Age and Gender The interaction effects of time using Youper with age on anxiety (N participants =4143; N observations =7090) and depression (N participants =3991; N observations =6683) symptoms were not significant from T0 to T1 (P anxiety =.77; P depression =.39) or from T1 to T2 (P anxiety =.54; P depression =.43) in the piecewise regression models. Number of Self-reported Diagnoses The interaction effects of the number of self-reported diagnoses with time using Youper on anxiety symptoms (N participants =2679; N observations =4661) was not significant from T1 to T2 (P=.08), and not significant from T0 to T1 (P=.59). There was a significant interaction effect of number of self-reported diagnoses with time using Youper on depression symptoms from T1 to T2 (N participants =2738; N observations =4589; b=0.02; bootstrapped 95% CI 0.007-0.04; P=.006), but not from T0 to T1 (P=.78). This indicated that users with more diagnoses regressed modestly towards their baseline level of depression in the latter half of the treatment, whereas users with fewer diagnoses retained the treatment benefit. Medication and Therapy There were no significant interaction effects of taking prescribed medication with time using Youper on anxiety (N participants =2719; N observations =4733) or depression (N participants =2776; N observations =4654) symptoms from T0 to T1 (P anxiety =.32.; P depression =.72) or from T1 to T2 (P anxiety =.57; P depression =.66). Summary The present study had 3 aims. First, we examined the acceptability of Youper AI therapy by assessing user ratings and retention metrics among subscribers. Second, we tested whether there were significant reductions in anxiety and depression symptoms. Third, we examined the cumulative regulation hypothesis, which predicts that the frequency of within-conversation emotion regulation success would predict symptom reduction. Findings indicated that users were well retained and provided high ratings of Youper (median 5/5). As hypothesized, users showed significant reductions in symptoms in the first 2 weeks of using Youper with sustained improvements through 4 weeks from initial download. Finally, consistent with the cumulative regulation hypothesis, greater frequency of within-conversation emotion regulation successes significantly predicted greater reductions in anxiety and depression. Although no demographic predictors emerged, users with more self-reported diagnosed psychiatric conditions showed a slight return of depression symptoms between 2 and 4 weeks from first subscribing to Youper. Acceptability and Effectiveness Because retention poses a significant challenge for entirely unguided treatment programs, our finding that 60.44% (2730/4517) of users continued to engage with the app in the second week and 42.66% (1927/4517) of users continued to engage with the app in the fourth week after initial download is promising. 
Although there are no clearly established metrics of retention for mobile apps, a recent paper examining retention among different mobile apps showed that Youper had the highest "stickiness" (measured by the ratio of active users to downloads in a given month) compared to any other treatment app for anxiety and depression [57]. Because Youper users experienced symptom improvements on average within the first 2 weeks of app use, with the present retention rate, it is likely that a large portion of users will stick with the app long enough to experience some positive effects. It is also notable that the median satisfaction rating given by users was 5 out of 5. Taken together, these findings indicate that Youper has great potential as a highly acceptable and adequately engaging digital treatment program. Youper users showed a moderate effect size reduction for anxiety (d=0.57) and depression (d=0.46) within 2 weeks of starting app use. The reduction in anxiety symptoms was maintained through the 4-week period (day 0 to day 28: d=0.60). The reduction in depression symptoms was maintained through the 4-week period (day 0 to day 28: d=0.42) although depression increased slightly, but significantly, between weeks 2 and 4. These effect sizes are comparable to those found in RCTs of other commercially available mobile apps tested for a similar duration [9][10][11][58], suggesting that the AI therapy approach is viable for further testing in a randomized clinical trial. Youper users also had high success at regulating their negative emotions with each conversation. Given the low cost and potential for broad dissemination of Youper, these findings are particularly exciting, as they provide preliminary evidence of Youper's effectiveness as an emotion regulation tool and a transdiagnostic treatment. It is important to note, however, that the final mean PHQ score of 11.9 still fell in the moderate severity range. Thus, as we begin to understand the mechanisms of the AI therapy approach and gain greater understanding of how to maximize user engagement, we are hopeful that effects on symptom reduction will continue to improve. Youper's symptom reduction, retention, and satisfaction ratings are notable because they were demonstrated in a real-world setting. Although highly controlled feasibility pilot trials allow determination of causal inference, these studies may not be generalizable to real-world settings and may fail to address issues of external relevance and dissemination [59]. Our analysis included a very large sample of Youper users who voluntarily downloaded and purchased the Youper program. Unlike in typical research settings, users were not recruited to participate or compensated for their assessments or for providing their feedback during their participation. Observed retention rates and symptom reduction therefore have already been shown in a real-world setting and population. Cumulative Regulation Hypothesis The finding that cumulative within-session emotion regulation was strongly predictive of symptom reduction provides preliminary evidence for a potential mechanism of the AI therapy just-in-time intervention approach. Youper is theorized to enact its effects by enhancing emotion regulation skills via just-in-time interventions. Thus, more effective emotion regulation sessions would indicate progress towards enhanced general emotion regulation skills and ultimately, symptom reduction.
Therefore, it is promising that the effectiveness of the emotion regulation practice predicts the longer-term impacts of app use on symptom reduction. Although these results provide initial support for the theorized model underlying Youper's treatment approach, randomization is critical for rigorously testing within-session emotion regulation as a mediator of symptom reduction. Predictors of Symptom Reduction Interestingly, no demographic predictors of symptom reduction emerged. These findings are largely consistent with the existing literature where demographic features rarely predict symptom reduction [60][61][62][63][64][65][66][67]. These findings are promising, suggesting that digital treatment programs can be broadly disseminated with similar potential benefit across demographic groups. The number of comorbid diagnoses was a significant predictor of response such that users who reported more diagnosed mental health conditions showed a slight return of depression symptoms between 2 and 4 weeks from the first subscription date. These findings are consistent with prior literature showing poorer outcomes with greater comorbidity in depression treatment [68][69][70]. Users with more diagnosed conditions likely have a more severe clinical presentation, meaning that an entirely self-guided program may be less effective for this group. The finding that concurrent medication and therapy did not significantly impact symptom reduction suggests that the demonstrated effects of Youper on symptoms are unlikely to be explained by concurrent treatment, and that participating in other treatments alongside Youper does not hinder its effects. Limitations and Future Directions Despite many strengths, our study had a few limitations. First, because these data were not collected as part of a research study, we did not have a control group, making it impossible to determine whether symptom reduction was simply due to the passage of time. However, given that effect sizes for symptom reduction that we found are comparable to those found in RCTs of other mobile app programs that showed significant differences between active treatment groups and wait list controls [9][10][11]58], it is unlikely that these effects can be explained by spontaneous remission. Second, because this was an observational study, we used the symptom data that were available to us, which included only self-report measures. Although we used validated measures, solely relying on self-report does not give a complete picture of the impact of Youper on clinical symptoms and overall functioning that could be more thoroughly assessed via clinical interviews. Third, this study included only 2 brief measures as outcomes: the PHQ and the GAD-7. Although these measures are widely used and show excellent psychometric properties, additional measures of anxiety, depression, and other purported outcome targets, such as quality of life and functioning, could help us better understand Youper's effectiveness. Fourth, 47.84% (2161/4517) of Youper users were concurrently taking medication or engaging in therapy, meaning that it is possible symptom reduction resulted from participation in these other treatments rather than Youper (although concurrent treatment was not a significant moderator of symptom reduction). Finally, our emotion regulation measure was not designed to assess the magnitude of emotion regulation success, meaning that our metric included only success or failure with each conversation. 
These limitations should be addressed in future studies that include a control group, that assess symptoms using clinician-administered measures, that include a broader array of self-report measures, and that use more precise measures of emotion regulation success. Conclusions This study provides preliminary evidence for Youper's acceptability in a real-world setting that is unfettered by the constraints of highly controlled clinical trials. It also provides evidence of Youper's effectiveness as an entirely unguided intervention for anxiety and depression. Finally, we demonstrated that Youper's effects on symptom reduction may be explained by repeated within-session emotion regulation successes, providing preliminary support for the process by which a just-in-time intervention can be effective for the treatment of emotional disorders. Our results highlight the potential impact of Youper as a low-cost, light-touch, transdiagnostic intervention for anxiety and depression that can be broadly disseminated to improve mental health for millions of people around the world.
2021-06-23T06:17:18.819Z
2020-12-24T00:00:00.000
{ "year": 2021, "sha1": "f2c60c01434d422deb4b36c06eb37dcf0e647661", "oa_license": "CCBY", "oa_url": "https://www.jmir.org/2021/6/e26771/PDF", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "fb570b1e0cb347580e8096d2c958455e609295c4", "s2fieldsofstudy": [ "Psychology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
233929396
pes2o/s2orc
v3-fos-license
Mathematical Model for Tapping Simulation to Predict Radial Pitch Diameter Difference of threads As the most important component of parts, thread has a great influence on the mechanical properties and service performance of parts. In order to ensure the quality of the thread, the thread quality inspection standard involves 11 main thread characteristic-s, of which the surface roughness of the thread is the most studied, and the research on Radial Pitch Diameter Difference (RPDD) is still blank. In this paper, a quasi-static model of the tapping process is developed based on the roundness error mechanism of the hole, which includes cutting force and cutting damping force. Due to the regenerative nature of cutting, the force on each cutting edge depends on both the tool’s current position and previous position. According to the eigenvalues and eigenvectors of the discrete state-transition matrix, RPDD is finally determined, and the influence of the chamfer length and the spindle speed on RPDD is simulated by this model. The results demonstrate that the chamfer length and spindle speed will affect RPDD, and the RPDD is the smallest when the chamfer length is 2 threads and the spindle speed is 1400 rev/min. The development of this model not only provides a cheap and effective method for the study of RPDD, but also lays a foundation for further experimental research. Introduction In modern industry, threaded connections are often used when parts and pipe connections need to be assembled in a non-destructive manner. The performance of threaded connections is critical to many fields such as petroleum, aerospace, shipbuilding, high-speed rail, nuclear energy, and automobile manufacturing [1][2][3].According to statistics, threaded connections generally account for 60% of the total mechanical components in each machinery and equipment [4].Therefore, it is of great significance to improve thread processing quality [5]. Poor thread quality will affect the mechanical properties and service performance of the components, such as tensile strength, torsion strength, vibration resistance, connection reliability, pipeline tightness, etc. [2,3,6].The factor that has the greatest influence on the mechanical properties and service performance of threaded components is the geometry and dimensional accuracy of the thread [2].Dong [2] and Leon [7] studied the influence of thread dimensional conformance on the selfloosening resistance and static strength of the threaded connection which will be greatly reduced due to the unqualified outer diameter, middle diameter or inner diameter of bolts and nuts. Nassar et al. [8] showed that unqualified thread root radius will adversely affect the fatigue performance of pre-tightened threaded fasteners. In order to ensure the quality of thread, the thread quality inspection standard involves 11 main thread characteristics, and RPDD is one of them. RPDD is defined as the maximum difference among the pitch diameters in all radial directions within a lead. When a thread with an excessive RPDD is matched with a qualified thread, the sharply changing contact area between the internal and external threads may cause uneven load distribution on the thread surface, and severe stress concentration occurs at the position with a smaller pitch diameter, which accelerates wear and tear, shorten the service life, and severely cause "looseness" or even "slip buckle" [3]. 
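RPDD as defined above is simply the spread of the pitch diameters measured in different radial directions within one lead. A minimal worked example of that definition, with hypothetical measurements for a small internal thread (all numbers are invented for illustration):

```python
# Minimal illustration of the RPDD definition: the maximum difference among
# pitch diameters measured in different radial directions within one lead.
# All readings below are hypothetical.
measured_pitch_diameters_mm = {
      0: 9.034,   # direction angle (deg) -> measured pitch diameter (mm)
     45: 9.041,
     90: 9.052,
    135: 9.047,
}

rpdd_mm = max(measured_pitch_diameters_mm.values()) - min(measured_pitch_diameters_mm.values())
print(f"RPDD = {rpdd_mm * 1000:.0f} um")   # 18 um for these example readings
```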
Compared with external threads, it is more difficult to ensure the quality of internal threads, especially small diameter internal threads. The machining of internal threads is very complicated, which is usually the last process of workpiece manufacturing. Any machining failure or reduced accuracy prevents the perfect, gap-free assembly of components, and can even lead to huge economic losses [9,10]. Tapping is a process widely used in the manufacturing of internal threads. Although in the past few years, other processing methods have been used to manufacture internal threads with great success, such as thread milling and turning [11][12][13][14][15][16], tapping is almost the only method to manufacture small diameter internal threads [17]. However, there is relatively little literature on thread quality in tapping. Fromentin et al. [18] showed that the use of suitable oil-based lubricants in tapping can improve the mechanical properties and surface integrity of internal threads. Piska et al. [19] compared taps with PVD and TiN composite coatings and uncoated taps to process C45, and the results showed that the use of coatings can effectively reduce tool damage and improve the surface finish of threads. Hsu et al. [20] used the Taguchi method to test and analyze the influence of tool parameters and cutting conditions on thread quality during tapping, and found that the larger the helix angle, the worse the quality of the thread. Bratan et al. [21] developed a tool to improve the threading process considering the combined deforming-cutting tap. The results have shown that utilising the proposed combined deforming-cutting taps in the processing of internal threads with a small diameter for M3-M6 in aluminium alloys enhanced the accuracy and surface quality of threads. It can be seen that the current literature on the thread quality obtained by tapping is mostly focused on the influence of various factors on the thread surface quality, and the research on the RPDD of the thread is almost blank. Therefore, it is very necessary to analyze the factors that affect the magnitude of RPDD by establishing a mathematical model of RPDD. For a standard internal thread, its pitch diameter can be determined by the nominal diameter and pitch of the thread, that is, pitch diameter = nominal diameter − pitch × 0.6495. Therefore, it can be considered that the RPDD of the thread is the result of the combined effect of the roundness error of the root circle and the pitch error. The mechanism of the roundness error of the root circle is similar to that of the hole. From the literature [22], it is known that the roundness error of the hole is largely caused by the vibration of the drill. The tap usually shows transverse vibration, torsional vibration and axial vibration during tapping, but there is no coupling relationship between the transverse vibration and the axial/torsional vibration of the tap [23]. Because the torsional stiffness and axial stiffness are much greater than the bending stiffness, the torsional and axial displacements of a cantilever tool (such as a tap) with a large length-to-diameter ratio are much smaller than the transverse displacement under a given force. Therefore, torsional vibration and axial vibration are ignored, that is, the pitch error generated in tapping is ignored. The pitch diameter of the thread is then only related to the nominal diameter of the thread, which means that the RPDD is caused by the roundness error of the root circle. Bayly et al. 
[24] established a quasi-static reaming model to explain the vibration of the tool during the cutting process and the resulting roundness errors in reamed holes, and process parameters such as the direction and the angular frequency of rotation of the tool will affect the dimensional of the roundness error. Deng et al. [25] believed that the dynamic excitation force will cause the deflection and the vibration of the tool, leading the roundness error of the hole, and developed a system control equation composed of exciting force to study mechanism of roundness error of the hole. This will provide great help for this research. Since the previous research in this area is still blank, the blind cutting experiments are not only difficult to determine the factors that affect the dimensional of R-PDD in tapping, but the cost of the experiment is also very high. Based on previous studies, this paper will establish a quasi-static model of vibration with reference to the analysis method of roundness error of the hole, and finally determine the dimensional of RPDD, laying a solid foundation for the next step of experimental research. Quasi-static model During the tapping process, as the tap enters the predrilled hole at an axial rate synchronized with the thread lead, each tooth cuts a layer of material from the surface formed by the previous tooth. In order to ensure the smooth execution of the cutting process, the thread end of the tap is made into a tapered surface with κ r as a chamfer to truncated the full thread, and a number of straight flutes for chip evacuation evenly distributed over the entire thread length divide the thread into several continuously distributed cutting teeth. The major cutting edges of the chamfered section of the tap are formed by the intersection of the tapered surface and the straight flute surfaces, and the minor cutting edges lie on the flank surfaces of the tap threads. As can be seen from the above description, the geometry of the tap is very complicated. In order to convenience the description of the tapping process, some coordinate systems first need to be established. The global coordinate system X-Y-Z whose origin O 1 is at the center of the hole is fixed on the workpiece surface, and the Z axis coincides with the hole axis. The cutting edge coordinate system is U-V-W with the origin O 2 at any designated point of the elementary cutting edge, and the W axis coincides with the major cutting edge. The local rotating coordinate system T-R-A is the transition coordinate system between the global coordinate system and the cutting edge coordinate system. Rotating the global coordinate system around the Z axis through φ i can be converted into the T-R-A coordinate system that changes with the position of the elementary cutting edge, and then rotating this coordinate system around the X and Y axes through λ and κ r respectively to convert to cutting edge coordinates system, as shown in Figures 1a and 2a. The three coordinate systems can be transformed into each other through the transformation matrix. The relationship between them can be expressed as: where As shown in Figure 1b, the dynamic model of the tap only considers the two orthogonal degrees of freedom of the tool in the radial direction. 
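The chain of frames just described (global X-Y-Z rotated about Z by the tooth angle φ i to give T-R-A, then about X and Y by λ and κ r to give the cutting-edge U-V-W frame) can be sketched with elementary rotation matrices. The rotation order and sign conventions below are assumptions, since Equation 1 itself is not visible in the extracted text, and the angle values are illustrative.

```python
import numpy as np

def rot_z(a):  # right-handed rotation about Z
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

# Global X-Y-Z -> local T-R-A (rotate about Z by the tooth angle phi_i),
# then T-R-A -> cutting-edge U-V-W (rotate about X by lambda, then about Y by kappa_r).
phi_i, lam, kappa_r = np.radians(60.0), np.radians(8.0), np.radians(15.0)
R_global_to_TRA = rot_z(phi_i)
R_TRA_to_UVW = rot_y(kappa_r) @ rot_x(lam)
R_global_to_UVW = R_TRA_to_UVW @ R_global_to_TRA

f_global = np.array([10.0, -5.0, 2.0])      # a force expressed in X-Y-Z
f_edge = R_global_to_UVW @ f_global         # the same force in U-V-W
f_back = R_global_to_UVW.T @ f_edge         # rotation matrices invert by transpose
assert np.allclose(f_back, f_global)
```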
The differential equation of the two-degree-of-freedom transverse vibration of the dynamic tapping system in the global coordinate system is: among them, M, C, K are the mass matrix, damping matrix and stiffness matrix of the tap respectively, and they are all second-order matrices. Whitehead [26] and Towfighian [27] mentioned that the vibrations to cause roundness errors are divided into two categories: chatter and low frequency vibration. In tapping process, the speed of the tap is usually low, and the frequency is much lower than the lowest natural frequency of the tool. At this time, the stiffness term in the equation plays a leading role, while the inertia term and damping term can be ignored. Therefore the above equation becomes the following form: Equation 4 eliminated the inertia and damping terms in the transverse vibration differential equation is called a quasi-static model. The exciting force F includes regenerative cutting force and cutting damping force due to the waviness on the pre-drilled surface. For the convenience of research, let the number of tap flutes be N f , and the tap are evenly dispersed into N z slices with length dz along the axial direction, then the force of the entire tap is distributed to the N f × N z elementary cutting edges. According to the cutting principle of the tap, each elementary cutting edge is regarded as an oblique cutting model. Cutting force The cutting force of oblique cutting is proportional to the cross-sectional area of the chip to be removed. There-fore, the cutting force F c can be calculated by the following formula: where k c is the cutting force coefficient, h is the cutting thickness, and b is the cutting width. When the tool's actual trajectory (shown by the dashed line in Figure 2b) deviates from the nominal one due to transverse vibration (shown by the solid line in Figure 2b), an additional dynamic uncut chip thickness will attach to the desired static uncut chip thickness. Since the static uncut chip thickness does not cause vibration, only the load caused by the dynamic uncut chip thickness remains in the cutting force. Therefore, the radial, tangential and axial components of the cutting force of elementary cutting edge in the cutting edge coordinate system U-V-W can be expressed as: where ∆h ij represents the dynamic uncut chip thickness of the j th elementary cutting edge corresponding to the i th flute. If the transverse displacement of the tap axis at time t is represented by x(t) and y(t), the transverse displacement ∆R ij of the elementary cutting edge can be determined by the displacement of the tap axis and the tooth angle φ i (Figure 1c): where τ 0 is the time interval between adjacent teeth. The dynamic uncut chip thickness can be expressed as: As the tapping depth increases, each cutting edge will experience partial and full engagement status. Therefore, the cutting force is constantly changing with the effective cutting edge length involved in cutting during the continuous tapping process. It can be seen from the geometric relationship in Figure 3 that the length of the cutting edge can be determined by its start and end positions in the axial direction. The start and end positions of each cutting edge in the axial direction can be expressed as: where k represents the serial number of the cutting edge, h ks is the start position of the k th cutting edge, h ke is the end position of the k th cutting edge, and P is the pitch. 
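Several of the display equations in this passage did not survive extraction, but the regeneration idea is simple to sketch: the dynamic uncut chip thickness is the tool-axis offset now minus the offset one tooth period τ 0 earlier, resolved onto the radial direction of the tooth at angle φ i, and the elementary force follows the stated proportionality F c = k c h b. The projection convention and all numerical values below are assumptions for illustration only.

```python
import numpy as np

def dynamic_chip_thickness(x_now, y_now, x_prev, y_prev, phi_i):
    """Regenerative (dynamic) uncut chip thickness for a tooth at angle phi_i.
    x/y are the tap-axis offsets (mm) at time t and one tooth period tau_0 earlier.
    Projecting the displacement difference onto the tooth's radial direction is
    assumed; the extracted text does not spell the convention out."""
    return (x_now - x_prev) * np.cos(phi_i) + (y_now - y_prev) * np.sin(phi_i)

def elementary_cutting_force(k_c, delta_h, b):
    """F_c = k_c * h * b, keeping only the dynamic part of the chip thickness,
    as in the quasi-static model (the static chip load excites no vibration).
    k_c in N/mm^2, delta_h and b in mm, result in N."""
    return k_c * delta_h * b

# toy numbers: 0.5 um of regenerative offset, 0.1 mm cutting width
dh = dynamic_chip_thickness(0.004, -0.0005, 0.002, -0.001, np.radians(90.0))
F = elementary_cutting_force(k_c=2000.0, delta_h=dh, b=0.1)
print(f"dynamic chip thickness = {dh * 1e3:.1f} um, elementary force = {F:.3f} N")
```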
Since the tap cutting edge is not continuously distributed, the window function g(z) is introduced to identify whether the elementary cutting edge corresponding to a certain flute engages with the workpiece at time t. The elementary cutting force can be expressed in the U-V-W coordinate system as: The elementary cutting force in the T-R-A coordinate system can be expressed as follows based on coordinate transformation:
k cu = [τ s / sin ψ n ] · [cos(β n − γ n ) + tan λ tan η sin β n ] / √[cos²(ψ n + β n − γ n ) + tan²η sin²β n ]
k cv = [τ s / (sin ψ n cos λ)] · sin(β n − γ n ) / √[cos²(ψ n + β n − γ n ) + tan²η sin²β n ]
k cw = [τ s / sin ψ n ] · [cos(β n − γ n ) tan λ + tan η sin β n ] / √[cos²(ψ n + β n − γ n ) + tan²η sin²β n ] (14)
where τ s is the shear stress, ψ n is the normal shear angle, λ is the inclination angle, β n is the normal friction angle, γ n is the normal rake angle, and η is the chip flow angle, and these values can be obtained according to the method in literature [28,29]. In the T-R-A coordinate system, the cutting force of all elementary cutting edges corresponding to a certain flute can be expressed as: where l 0 is the start cutting position, l is the current cutting position, and they can be expressed as: where D h is the diameter of the pre-drilled hole with a value of 8.5 mm, d is the tap nominal diameter, and d 0 is the chamfer point diameter. By converting the cutting forces of all flutes from the local coordinate system to the global coordinate system and then adding them, the cutting forces ∆F cx , ∆F cy and ∆F cz in the tangential, radial and axial directions of the tap can be calculated as follows: through simplification, Equation 18 can be expressed as:
Cutting damping force
The actual cutting edge is not sharp, but is a small arc, as shown in Figure 4b. Therefore, the edge force and the cutting damping force are generated because the material under the flank face of the cutting edge is extruded [30]. Since the edge force does not cause vibration, only the cutting damping force remains. The cutting damping force is caused by the interference between the uneven workpiece surface caused by the tool vibration and the tool flank surface, which causes the effective flank angle α eff (as shown in Figure 4a) to change. Literature [31] mentioned that the cutting damping force is composed of the normal force F dv and the friction force F du , and the normal force F dv is proportional to the volume V of the workpiece material extruded under the flank face of the cutting edge: where K sp is the specific contact force, which depends on the material and the geometry of the cutting edge. The volume V is related to the geometry of the cutting edge, the vibration speed ċ perpendicular to the machined surface and the cutting speed v c (Figure 4a): where l w is the interference length between the workpiece surface and the flank face of the tool. According to the model proposed by Ahmadi et al. [30], the interference length l w can be written as: where r h is the hone radius, α is the flank angle and θ is the separation angle which defines the position where the workpiece material is no longer removed as chips through the rake face, but is extruded under the flank face of the tool. 
Therefore, the normal component F dv of the cutting damping force can be written as: where The friction component F du of the cutting damping force is considered to be proportional to the normal component F dv , so it can be written as: where µ is the Coulomb friction coefficient, which is taken as 0.3 according to literature [32]. Therefore, the radial and tangential components of the cutting damping force of the elementary cutting edge in the U-V-W coordinate system can be expressed as: where n is the spindle speed, v c is the cutting speed at different axial positions. v c = [πd 0 + nP t tan κ r ]n Substituting the dynamic uncut chip thickness into Equation 28, and then based on the coordinate transformation, the elementary cutting damping force in the T-R-A coordinate system can be determined as follows: Like the method of obtaining the cutting force, the tangential ∆F dx , radial ∆F dy and axial ∆F dz components of the cutting damping force can be expressed as: ∆x ∆y (31) through simplification, Equation 31 can be expressed as: Elastic force In the quasi-static equation, KX is the elastic force, where K is the stiffness matrix of the tool. For a twodegree-of-freedom system, it can be expressed as: Due to the small bending deformation of the tool, the vibration of the tool can be considered to be restricted to a plane perpendicular to the axis of the undeflected tool. Therefore, the relationship between the elastic force and the axis offset can be expressed as: It is assumed that the tap is a cantilever beam whose force is concentrated on the tip of the tool in this analysis. According to literature [27],the stiffness matrix can be written as: where EI is the bending stiffness of the tool material,E is the elastic modulus of material of the tap,and L is the total length of the tap. Solutions of quasi-static equations Now substituting Equations 19 and 32 into Equation 4, the quasi-static equilibrium equation is rewritten as follows according to the displacement of the tap axis: The current value of X is expressed as a function of the previous value, and the friction time delay τ is approximately a fraction of the cutting time delay τ 0 , that is, τ = τ 0 /m. Therefore, the successive tool axis position equation can be expressed as: where Equation 38 is a difference equation with variable coefficients. In order to obtain the solution of the equation, based on the freezing coefficient method [33], the successive cutting process is discretized into several steps with time τ , each of which is approximated by a constant coefficient difference equation. For the convenience of solving, Equation 38 is written as a large matrix form: . . . this is where the matrix Q t−τ is the state-transition matrix at time t−τ ,I is a second-order identity matrix. According to the literature [34], Equation 41 can be written as follows: where λ is a characteristic exponent, whose imaginary and real parts represent the oscillation frequency and growth or decay rate respectively. Equation 42 is a characteristic equation with m eigenvalues. The eigenvalues and eigenvectors satisfying the matrix Q t−τ are µ t−τ and q t−τ respectively, which can be determined by solving the characteristic equation. Therefore, the instantaneous position vector X i t−τ of m modes at time t − τ can be extracted from the m eigenvectors respectively. 
The displacement vector X i of the i th mode at time t can be written as: At time t, the displacement vector X of the tool axis can be obtained by superimposing the displacement vectors of m modes. The pitch diameter of the thread will also change with the axis offset. The pitch diameter at time t can be expressed as: where D 2 (t) is the pitch diameter of the thread at time t, and D is the nominal diameter of the thread which is equal to the nominal diameter d of the tap,the offset ∆D(t) at time t can be expressed as: where φ(t) is the angle of rotation in time t. RPDD of the thread can be expressed as the difference between the maximum and minimum pitch diameters: where D 2max and D 2min are the maximum and minimum pitch diameters of the thread, respectively. Time domain simulation is performed based on the tool motion equation of Equation 38 , which is also quasistatic. The static equilibrium position of the tool, sub- ject to regenerative cutting and cutting damping forces, is found at every time step in the simulation, and then the dimensional of RPDD is determined by the Equation 47.The tool used in the simulation is a standard straight flute high-speed steel tap, and its parameters are shown in Table 1.The workpiece is an AISI1045 plate with pre-drilled holes. In the cutting simulation, 6 groups of tests whose parameters are listed in Table 2 were performed to study the influence of the chamfer length and the spindle speed on RPDD. In order to analyze the influence of the chamfer length and the spindle speed on RPDD, the eigenvalues and eigenvectors of all modes of each group of tests are obtained by simulation. The vibrations of first four modes which are selected from all modes according to the rate of decay or growth of each mode because of their importance are considered here, and RPDD obtained is a combination of these four modes. Figure 5 shows the change of the pitch diameter of the first full thread and the movement trajectory of the outermost cutting edge of the tap in the global coordinate system during the entire cutting process when the chamfer length is 2 threads, 4 threads and 8 threads. From the comparison of the three figures, it is found that the change of the chamfer length has an effect on RPDD.As can be seen in Figure 7a, the dimensional of RPDD shows a trend of first increasing and then decreasing with the increase of the chamfer length and the smallest RPDD is obtained when the chamfer length is 2 threads. The possible reason is that fewer cutting edges are involved in cutting, causing the vibration amplitude to increase briefly and then become a steady periodic vibration. Figure 6 shows the change of the pitch diameter of the first full thread and the movement trajectory of the outermost cutting edge of the tap in the global coordinate system during the entire cutting process when the spindle speed is 700 rev/min, 1050 rev/min, 1400 rev/min and 1750 rev/min. From the comparison of the three figures, it is found that the change of the spindle speed has an effect on RPDD.As can be seen in Figure 7b, the dimensional of RPDD shows a trend of first decreasing and then increasing with the increase of the spindle speed and the smallest RPDD is obtained when the chamfer length is 1400 rev/min. The possible reason is that more cutting edges quickly participate in cutting when the spindle speed increases, resulting in a rapid increase in damping and a decrease in amplitude. 
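Once the time-domain simulation has produced the tool-axis displacement, the pitch-diameter trace and RPDD follow directly from the relations above: D 2 (t) is the nominal diameter plus the offset term, and Equation 47 takes the max–min spread. The sketch below assumes a plain projection of the axis offset onto the instantaneous rotation angle, since the exact form of the offset equation is not visible in the extracted text, and the axis motion is a toy signal rather than simulation output.

```python
import numpy as np

def rpdd_from_axis_motion(t, x, y, D_nominal, omega):
    """Pitch-diameter trace and its spread (RPDD, Eq. 47) from simulated axis motion.
    x(t), y(t) are the tap-axis offsets in mm, omega the spindle speed in rad/s.
    Resolving the offset onto the instantaneous radial direction phi(t) = omega*t
    is an assumption; the exact offset relation is not visible in the extracted text."""
    phi = omega * t
    delta_D = 2.0 * (x * np.cos(phi) + y * np.sin(phi))   # diametral effect of the offset
    D2 = D_nominal + delta_D                               # pitch diameter at time t
    return D2.max() - D2.min()                             # Eq. 47: RPDD

# toy axis motion: a small elliptical whirl superimposed on the nominal rotation
t = np.linspace(0.0, 0.5, 2000)                 # s
omega = 2.0 * np.pi * 1400.0 / 60.0             # 1400 rev/min
x = 3e-3 * np.sin(3.0 * omega * t)              # mm
y = 2e-3 * np.cos(3.0 * omega * t)              # mm
print(f"RPDD = {rpdd_from_axis_motion(t, x, y, D_nominal=9.026, omega=omega) * 1e3:.1f} um")
```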
But at the same time, the increase of the spindle speed will also lead to the decrease of the damping change rate, so a higher speed will restrain the amplitude reduction. Summary A quasi-static tapping model, including delayed cutting force and cutting damping force was established. A similar cutting force model and cutting damping force were developed based on the work of previous researchers. The force equilibrium equation is expressed as a discretetime matrix equation describing the successive state of the tool. According to the eigenvalues and eigenvectors of the state transition matrix, the characteristic solution of the equilibrium equation is found, and then R- This model was used to simulate the tapping of AISI1045 plates using standard straight flute high-speed steel taps, and the dimensional of RPDD under different chamfer lengths and spindle speeds was obtained. By comparison, it is found that the chamfer length and spindle speed will affect RPDD, and RPDD is the smallest when the chamfer length is 2 threads and the spindle speed is 1400 rev/min. Since the research on R-PDD is still blank so far, the development of this model not only provides a cheap and effective method for the study of RPDD, but also lays a foundation for further experimental research. This model provides a method for the research of RPDD and the research on the stability of tapping in the literature [23] also confirms the conclusion of this article, but it needs to be verified by experiments in order to better illustrate its effectiveness. In addition, in the actual tapping process, the distributed force acts on the entire engagement length of the tool, but this model applies concentrated force on the tip of the tool, which will have a certain impact on the accuracy of the model. These issues will be studied in the future. Funding information The research is financially supported by the National Natural Science Foundation of China (No. 51275333) Conflict of interest The authors have no relevant financial or non-financial interests to disclose. Availability of data and material The data sets supporting the results of this article are included within the article and its additional files. Authors' contributions Jie Ren developed a mathematical model to predict the radial diameter difference of threads, analyzed the simulation results, and was a major contributor in writing the manuscript. Xianguo Yan provided guidance for the writing of manuscript. All the authors read and approved the final manuscript. Ethics approval All analyses in this paper are based on previously published research and this paper does not involve animal and human testing, so this item is not applicable to this paper. Consent to participate All analyses in this paper are based on previously published research and this paper does not involve animal and human testing, so this item is not applicable to this paper. Consent for publication The Author confirms: that the work described has not been published before (except in the form of an abstract or as part of a published lecture, review or thesis); that it is not under consideration for publication elsewhere; that its publication has been approved by all co-authors ,if any; that its publication has been approved (tacitly or explicitly) by the responsible authorities at the institution where the work is carried out. The author agrees to publication in the Journal indicated below and also to publication of the article in English by Springer in Springer's corresponding Englishlanguage journal. 
The copyright of the English article is transferred to Springer effective if and when the article is accepted for publication.
Fig. 4. Tool path and cutting damping force geometry.
Fig. 5. Change of the pitch diameter of the first full thread and the movement trajectory of the outermost cutting edge of the tap in the global coordinate system during the entire cutting process when the chamfer length is 2 threads, 4 threads and 8 threads.
Fig. 6. Change of the pitch diameter of the first full thread and the movement trajectory of the outermost cutting edge of the tap in the global coordinate system during the entire cutting process when the spindle speed is 700 rev/min, 1050 rev/min, 1400 rev/min and 1750 rev/min.
2021-05-08T00:04:19.383Z
2021-02-12T00:00:00.000
{ "year": 2021, "sha1": "7c66e484af3e8dee745541adef032206f5e3768b", "oa_license": "CCBY", "oa_url": "https://www.researchsquare.com/article/rs-192937/latest.pdf", "oa_status": "GREEN", "pdf_src": "ScienceParsePlus", "pdf_hash": "492ae51b870f9629aa194d424047b2804e0f8d84", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Mathematics" ] }
55116993
pes2o/s2orc
v3-fos-license
Dual control of fault intersections on stop-start rupture in the 2016 Central Italy seismic sequence
Abstract Large continental earthquakes necessarily involve failure of multiple faults or segments. But these same critically-stressed systems sometimes fail in drawn-out sequences of smaller earthquakes over days or years instead. These two modes of failure have vastly different implications for seismic hazard and it is not known why fault systems sometimes fail in one mode or the other, or what controls the termination and reinitiation of slip in protracted seismic sequences. A paucity of modern observations of seismic sequences has hampered our understanding to-date, but a series of three M w > 6 earthquakes from August to November 2016 in Central Italy represents a uniquely well-observed example. Here we exploit a wealth of geodetic, seismological and field data to understand the spatio-temporal evolution of the sequence. Our results suggest that intersections between major and subsidiary faults controlled the extent and termination of rupture in each event in the sequence, and that fluid diffusion, channelled along these same fault intersections, may have also determined the timing of rupture reinitiation. This dual control of subsurface structure on the stop-start rupture in seismic sequences may be common; future efforts should focus on investigating its prevalence.
Introduction
In regions of distributed continental faulting, networks of active faults are commonly segmented on length scales of 10-25 km, approximately equal to the seismogenic thickness of the Earth's crust (Scholz, 1997;Stock and Smith, 2000;Klinger, 2010). This intrinsic maximum fault size limits the magnitude of continental earthquakes that rupture a single fault or segment to <M w ∼ 6-7 (Pacheco et al., 1992;Triep and Sykes, 1997), depending on local seismogenic thickness and fault geometry. Therefore, large continental earthquakes above this threshold (Scholz, 1997), such as the 2010 M7.2 El Mayor-Cucapah, Mexico, and 2016 M7.8 Kaikoura, New Zealand, earthquakes, involve failure of multiple faults or segments, where either dynamic and static stress transfer cause cascading failure of multiple critically-stressed faults or rupture is arrested before all these faults have failed. In the latter case the start of rupture in subsequent subevents determines the temporal evolution of the seismic sequence. Large earthquakes and seismic sequences have vastly different implications for seismic hazard: high hazard in a single event, or moderate hazard spanning years or potentially decades.
Fig. 1. Overview of epicentral region and fault geometry used in this study. (a) Regional tectonic map, showing mapped active normal faults (magenta, modified from Roberts and Michetti, 2004), up-dip surface projection of model faults displayed in Figs. 6-9 (coloured to match (c) and Figs. 3, 4 and 6), and bodywave focal mechanisms for each earthquake (see Fig. 2). White dashed line shows inferred east-dipping fault from Fig. 7, and relocated aftershocks from Chiaraluce et al. (2017) are shown in black. Locations of short-baseline GNSS instruments are shown by blue triangles. Black box shows extent of Fig. 3c. (b) Regional map showing the location of (a) and direction of regional crustal extension. (c) 3D cartoon of model fault geometry adopted in this study. Thick coloured lines show the surface projection of each fault and correspond to the coloured faults in (a) and Figs. 3, 4 and 6. (For interpretation of the colours in this figure and other figures, the reader is referred to the web version of this article.)
But our understanding of what controls whether multifault rupture occurs over days to years or in seconds, and of what controls the spatio-temporal evolution of seismic sequences, has been severely limited by a paucity of high-resolution observations of modern seismic sequences. Combined analysis of geodetic and seismological data can image stop-start rupture behaviour and address these questions, by disentangling the spatial pattern and temporal evolution of slip in seismic sequences at high resolution. A sequence of 3 M w > 6 earthquakes from August to November 2016 in the Central Apennine mountains, Italy (Fig. 1) presents a rare chance to investigate a seismic sequence with modern datasets and here we exploit seis-mological and field observations, as well as geodetic data, to image the kinematics of the sequence, and to understand structural and dynamic controls on its evolution. Our results suggest that structural complexity, namely the intersections between two sets of oblique faults, may have played an important dual role in the Central Italy seismic sequence: first by limiting the extent of individual ruptures and second by channelling fluid flow and controlling the timing of subsequent failure throughout the sequence. Seismological constraint on earthquake source mechanisms The Central Italy seismic sequence started with an M ∼ 6 earthquake on the 24th August 2016, and was followed by tens of thousands of aftershocks, including two large M > 6 events on the 26th and 30th October (Chiaraluce et al., 2017, Fig. 1). We refer to these three major earthquakes as the Amatrice, Visso and Norcia events respectively. The seismic sequence continued into 2017, with several earthquakes M < 5.7 on January 18th, but here we focus on the three largest events only. Left column shows the best-fit seismological focal mechanism (black) determined using MT5 software package (Zwick et al., 1994). For each of the Amatrice, Visso, and Norcia earthquakes, the composite geodetic mechanism from the geodetic solution is shown, as both the full moment tensor (dashed green line) and best double couple (solid green line). Best-fit seismological mechanism parameters and source time function are also given. Central panels show depth-misfit curves. Each point on the curve is determined by fixing the centroid depth to the given value, and inverting the waveform data to determine the best fit mechanism and source-time function at that depth. Vertical green bars show the centroid for the geodetically-derived slip distributions shown in Fig. 6 for the Amatrice, Visso and Norcia earthquakes, for comparison. Right panels show dip-misfit plots. On each, red indicates the SW-dipping plane, blue the NE-dipping plane. Waveforms and best-fit synthetics for each earthquake are shown in Supplementary Figs. S1-S3. For each of these three earthquakes, we invert teleseismic longperiod body waves for the best-fit focal mechanism (Fig. 2, Supplementary Figs. S1-S3). We treat each earthquake as a finite-duration point-source centroid, with a moment-release function parameterised by a series of 1 s triangular elements. We invert P and S H waveforms to determine the moment-rate function, a centroid depth, total moment and a focal mechanism (strike, dip, rake), using a least-squares approach (e.g. see Walters et al., 2009). We estimate a seismological M w of 6.2, 6.1 and 6.6 for the Amatrice, Visso and Norcia earthquakes respectively, and find similar normal-faulting mechanisms on NNW-SSE striking faults for each event. 
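The moment magnitudes quoted above follow the standard relation M w = (2/3)(log10 M 0 − 9.1), with M 0 in N·m. A quick sketch converting in both directions; the moment values printed are back-calculated for illustration, not taken from the study:

```python
import numpy as np

def moment_magnitude(M0_Nm):
    """Standard moment magnitude (IASPEI convention), M0 in N*m."""
    return (2.0 / 3.0) * (np.log10(M0_Nm) - 9.1)

def seismic_moment(Mw):
    """Inverse relation: seismic moment in N*m from Mw."""
    return 10.0 ** (1.5 * Mw + 9.1)

# Back-calculated moments for the three mainshocks (illustrative only)
for name, Mw in [("Amatrice", 6.2), ("Visso", 6.1), ("Norcia", 6.6)]:
    M0 = seismic_moment(Mw)
    print(f"{name}: Mw {Mw} -> M0 = {M0:.2e} N*m -> Mw {moment_magnitude(M0):.1f}")
```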
Our seismological results suggest shallow centroid depths of ∼4 km and relatively shallow dips for all three earthquakes (<∼45 • for the Visso and Norcia earthquakes, <∼50 • for the Amatrice earthquake; Fig. 2). The centroid depths estimated here agree well with previous seismological estimates (e.g., Chiaraluce et al., 2017;Pizzi et al., 2017). All these seismological results are consistent in indicating that the majority of moment release is concentrated within the upper ∼8 km of the crust, with centroids from 3-6 km depth for all three of the largest events. Comparison between the focal mechanisms estimated here, and the composite focal mechanisms resulting from finite-fault inversion of near source data shows a good agreement in strike and dip, with a slight (∼10 • ) difference in rake that likely results from collapsing a distributed pattern of slip across several faults into a simple composite mechanism (Pizzi et al., 2017). Field measurement of surface ruptures The normal faulting mechanism from our seismological results is consistent with the NE-SW extension that characterises the Apennine mountain belt (D'Agostino et al., 2011, Fig. 1b). The earthquakes occurred in the region of the SW-dipping Laga and Mt. Vettore (hereafter referred to as 'Vettore') normal faults. Of these two major faults, the Laga fault was thought to have last ruptured in a major earthquake in 1639 A.D. (Rovida et al., 2016), whilst the Vettore fault was only known to be active from palaeoseismic investigations (Galadini and Galli, 2003). Following each of the three major events, we started mapping the surface ruptures the day after the earthquake, contributing to the compilation of measurements collated by the openEMERGEO international working group (Civico et al., 2018). Measurements of throw, heave, net slip and slip vector together with fault strike and dip were collected in the region of the Vettore and Laga faults, using a handheld compass clinometer and a ruler. For the Amatrice earthquake, measurements were collected over the 1 month following the earthquake. For the Visso earthquake, due to the short time between this event and the subsequent Norcia earthquake, the identification and mapping of the ruptures is likely incomplete and we were only able to collect measurements of the slip. The Amatrice and Visso earthquakes each generated semicontinuous surface ruptures with ∼10-20 cm slip on pre-existing bedrock scarps, towards the southern and northern ends of the mapped Vettore fault respectively (Figs. 3,4). In contrast, the Norcia earthquake ruptured portions of the Vettore fault along its entire length, generating offsets up to 2.3 m along the central portions of the fault, and re-rupturing some of the same sections that failed in the earlier earthquakes (Figs. 3, 4). These sections re-ruptured with the same kinematics but approximately an order of magnitude greater slip. During the Norcia earthquake, numerous smaller faults in the hangingwall of the Vettore fault were also activated, including metre-scale displacement of an antithetic structure ∼2 km SW of the main fault (Fig. 4c). These field measurements are consistent with our seismological solutions and the regional extension direction. 
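For the largely dip-slip offsets measured here, the field quantities are linked by simple trigonometry: throw = net slip × sin(dip) and heave = net slip × cos(dip). A small sketch; the dip value is illustrative, and oblique slip would additionally require the measured slip vector:

```python
import numpy as np

def dip_slip_components(net_slip_m, dip_deg):
    """Throw (vertical) and heave (horizontal) of a pure dip-slip offset on a
    plane dipping dip_deg. Oblique slip would also need the rake/slip vector."""
    dip = np.radians(dip_deg)
    return net_slip_m * np.sin(dip), net_slip_m * np.cos(dip)

# e.g. a 2.3 m net dip-slip offset on a ~50 degree dipping scarp (dip value illustrative)
throw, heave = dip_slip_components(2.3, 50.0)
print(f"throw = {throw:.2f} m, heave = {heave:.2f} m")
```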
Geodetic datasets We measure the surface displacements during the seismic sequence with eighteen separate geodetic datasets: one regional GNSS dataset for each of the three earthquakes (INGV, 2016a(INGV, , 2016b, one short-baseline GNSS dataset for the Norcia earthquake (Wilkinson et al., 2017) and fourteen InSAR datasets from the Sentinel-1 and ALOS-2 satellites, each constraining the coseismic displacement fields from one or more of the three earthquakes ( Fig. 5g, Supplementary Fig. S4). We processed Sentinel-1 Synthetic Aperture Radar interferograms using the GAMMA software package (http://www.gammars .ch) and unwrapped them using the MCF algorithm (Costantini, 1998). ALOS-2 interferograms were generated using the JPL/Caltech/Stanford ISCE package (https://winsar.unavco .org /isce .html) and unwrapped using the SNAPHU method (Chen and Zebker, 2002). Orbital effects were corrected using precise orbits from the European and Japanese Space Agencies respectively, and topographic effects were removed using 1-arcsec topographic data from the Shuttle Radar Topographic Mission (Farr et al., 2007). Unwrapping errors were manually checked and corrected. Interferograms were resampled in preparation for modelling using a nested uniform sampling approach (e.g. Floyd et al., 2016), with higher density in the nearfield and lower density in the farfield, to obtain about 1500 line-of-sight data-points per interferogram ( Supplementary Fig. S4). Model fault geometry In order to relate our geodetic measurements of surface displacement to slip on faults in the subsurface, we first define a simplified array of nine rectangular model faults (Fig. 1a, c, Supplementary Table 1). This level of complexity in source geometry is commonly required for geodetic modelling of multi-segment, moderate-magnitude events like the Norcia earthquake (e.g. the S. Napa, California and Darfield, NZ earthquakes; Floyd et al., 2016;Elliott et al., 2012), and requires that fault geometries are fixed prior to inversion, often on the basis of additional geological or geophysical constraints. For the Central Italy seismic sequence, we use a wealth of such additional information to define fault geometries at the surface and at depth, including: 1) discontinuities and low-coherence regions in our InSAR data (e.g. Fig. 5a, c, e), which are commonly indicative of surface or near-surface faulting; 2) our field mapping of surface ruptures ( Fig. 3; Civico et al., 2018); 3) relocated aftershock clouds (Chiaraluce et al., 2017; e.g. Supplementary Fig. S5) and; 4) our body-wave focal mechanisms. Our model geometry primarily consists of four major fault segments: three segments for the main Vettore fault, and one for the northern Laga fault. The locations, strikes and dips of these four faults are well constrained by the above datasets. We place three additional minor faults within the hangingwall of the Vettore fault; one antithetic and two synthetic (Fig. 1c). These minor faults are necessary to explain important near-fault complexity both in the geodetic displacements for the Norcia earthquake (e.g. the region between the central Vettore fault and the minor antithetic fault in Fig. 5c) and the complex array of decimetric surface ruptures we mapped in the field (Fig. 3). Tests showing the increased local misfit to these data when the minor faults are each removed from the model are shown in Supplementary Figs. S13-S24 and summarised in Supplementary Table 2. 
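The nested uniform sampling mentioned above can be sketched as two block-averaging passes, keeping fine blocks near the fault and coarse blocks in the far field. The block sizes, near-field radius and function names below are illustrative choices rather than the settings of the processing described in the text, and the input is synthetic noise standing in for an unwrapped interferogram.

```python
import numpy as np

def block_average(los, x, y, block):
    """Average unwrapped LOS displacement over square blocks of `block` pixels."""
    pts = []
    ny, nx = los.shape
    for i0 in range(0, ny, block):
        for j0 in range(0, nx, block):
            tile = los[i0:i0 + block, j0:j0 + block]
            if np.isfinite(tile).any():
                pts.append((x[j0:j0 + block].mean(), y[i0:i0 + block].mean(),
                            np.nanmean(tile)))
    return np.array(pts)

def nested_sample(los, x, y, fault_xy, near_km=15.0, fine=5, coarse=20):
    """Two-tier sampling: fine blocks within near_km of the fault, coarse elsewhere."""
    fine_pts = block_average(los, x, y, fine)
    coarse_pts = block_average(los, x, y, coarse)
    d_fine = np.hypot(fine_pts[:, 0] - fault_xy[0], fine_pts[:, 1] - fault_xy[1])
    d_coarse = np.hypot(coarse_pts[:, 0] - fault_xy[0], coarse_pts[:, 1] - fault_xy[1])
    return np.vstack([fine_pts[d_fine <= near_km], coarse_pts[d_coarse > near_km]])

# synthetic stand-in: 400 x 400 pixel scene covering an 80 km x 80 km area
rng = np.random.default_rng(0)
x = np.linspace(0.0, 80.0, 400)                    # km, easting of each column
y = np.linspace(0.0, 80.0, 400)                    # km, northing of each row
los = rng.normal(0.0, 0.01, (400, 400))            # m, noise-only unwrapped LOS
samples = nested_sample(los, x, y, fault_xy=(40.0, 40.0))
print(samples.shape)                                # on the order of 1000 points
```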
We note that whilst the geodetic and field data constrain the surface location of these three faults and slip in the shallow subsurface, the geodetic data are insensitive to their geometry at depths greater than 1-2 km due to strong trade-offs with slip on the main Vettore fault. Two more faults represent: a 12-km long ENE dipping structure antithetic to the Vettore fault that we call the Norcia Antithetic fault; and a NE-SW striking 14-km long normal fault we call the Pian Piccolo fault that cross-cuts the Castelluccio plain between the Vettore fault and Norcia fault system to the SW (Figs. 1, 3c). The intersections between model faults at depth are supported by alignments of relocated aftershocks (from Chiaraluce et al., 2017) when projected onto our model fault planes (Figs. 6, 7). The Pian Piccolo fault is required to explain a strong NE-SW aligned signal in the InSAR data that covers the Norcia earthquake (e.g. Fig. 5e). In addition, the location and strike of this fault are supported by the geomorphology of the Castelluccio plain ( Fig. 3d), previous geological mapping (Coltorti and Farabollini, 1995) and by relocated aftershocks (Fig. 7), the latter of which also constrains the dip at ∼40 • . Tests removing this structure require major (>2 m) slip in the Norcia earthquake on the Vettore fault at depths greater than 10 km, making the geodetic centroid depth incompatible with the centroid depths obtained from body-wave seismology (Fig. 2, Supplementary Figs. S13, S15). The Norcia Antithetic fault is strongly delineated in the aftershock data (Chiaraluce et al., 2017) so we include this structure in our model geometry, with this fault truncated at its southern end by the Pian Piccolo fault. This structure is less well constrained by the geodetic data than the other eight faults, and we ran several tests with: the Pian Piccolo fault truncated instead by the Norcia Antithetic fault; the two faults crossing and neither truncating the other; and the Norcia Antithetic fault removed completely. Whilst our preferred geometry for the Norcia Antithetic fault (Fig. 1c) does improve the fit to both the InSAR and GNSS data for the Norcia earthquake, these alternative geometries only result in marginally higher misfit to the data. However, it is important to note that in all of these alternative geometries, the distribution of slip on the Vettore-Laga fault system, and therefore also the major findings of this study, remain the same. Inversion for distribution of slip in the sequence Each of the nine model faults are discretised into patches ∼1 km along strike × 1 km in depth (Supplementary Table 1). We solve for the distribution of slip across this fault array during four discrete intervals: three coseismic intervals associated with each of the three M > 6 earthquakes and one postseismic interval that follows the Amatrice earthquake and precedes the Visso earthquake (Fig. 5g, red stars and arrow). We jointly invert all geodetic data for slip in these intervals following the method employed by Floyd et al. (2016). Surface displacements are modelled as resulting from slip on rectangular dislocations in an elastic half-space (Okada, 1985), with shear modulus 3.23e10 and a Poisson's ratio of 0.25. We solve for two components of slip for each fault patch to allow spatially-varying rake, with a non-negative constraint on the inversion. We also force slip to be zero on certain masked regions of our model during the first three time intervals (hashed regions on Fig. 6). 
This is for two reasons: to prevent fitting of noise in geodetic data away from the earthquake of interest, and to prevent high slip on shallow model patches that have low temporal resolution (see below) and for which field mapping revealed no significant slip in the relevant time interval. Relative to the InSAR data, the regional and short-baseline GNSS data are weighted by factors of 30 and 6 respectively, to take into account both the relative variance of the different datasets and the much larger number of InSAR measurements. GNSS uncertainties are those given as formal uncertainties, and InSAR covariance is estimated for each dataset by fitting an exponential function to the 1D radial autocovariance from a non-deforming region of the data (e.g. Funning et al., 2005). We tested variations in the relative weightings, and higher weightings of the InSAR data led to degradation in the fit to the GNSS data without significant improvement to the fit to the InSAR, essentially overfitting noise in the InSAR ( Supplementary Fig. S11). We regularise our inversion using a Laplacian smoothing criterion, and choose a smoothing factor that represents a compromise between smoothness of the solution and goodness-of-fit to the geodetic data ( Supplementary Fig. S12). Whilst changing the smoothing factor within reasonable bounds changes the peak magnitude of slip, it does not affect the spatial pattern of slip. We estimate uncertainties on our geodetic slip distributions using a Monte Carlo approach, (e.g. Funning et al., 2005, Supple-mentary Fig. S6), and estimate the spatial and temporal resolution of our slip model using a resolution spike test (Du et al., 1992;Supplementary Figs. S7 and S8). Recovered distribution of slip The recovered slip distributions for the three coseismic intervals are shown in Fig. 6, along with composite geodetic focal mechanisms. We compare these focal mechanisms to our body-wave solutions in Fig. 2, and find that the total moment release, the centroid depth, and the geometry of the SW-dipping nodal plane match extremely well in all cases. The slip vector (and hence auxiliary plane) in the seismological solutions shows some discrepancy with the composite geodetic moment tensors, with the geodetic solutions in each case incorporating a slight oblique component to the moment tensor, compared with the almost pure dip-slip seismological moment tensors. Fig. 6 shows that slip is confined to the top ∼6 km of the crust in all events, which supports the shallow, <4 km centroid depths from our body-wave solutions. In addition, the magnitude and location of slip in the top km of the model agrees well with our independent estimates from the field (Figs. 6, 3), which we take as validation of our choice of model geometry, given the complexity of the mapped network of surface ruptures. Our results show that slip from the three M > 6 earthquakes on the Vettore and Laga faults (Fig. 7) is spatially complementary and slip is restricted to <∼6 km depth, first-order observations consistent with the results of lower spatial-resolution seismological models (Chiaraluce et al., 2017;Pizzi et al., 2017). However, whilst previous geodetic models of the three events (Cheloni et al., 2017;Xu et al., 2017) also show this same general interdependence of slip, our results for the Visso and Norcia earthquakes show some differences to these studies. 
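The joint inversion described in this section can be condensed into a single weighted, regularised, non-negative least-squares problem: weighted data equations stacked on top of a scaled smoothing operator. The sketch below uses a random stand-in for the Okada Green's functions, a 1-D second-difference operator in place of the full Laplacian over the fault mesh, and one slip component per patch, so it illustrates the structure of the inversion rather than reproducing it.

```python
import numpy as np
from scipy.optimize import nnls

def laplacian_1d(n):
    """Second-difference (smoothing) operator over n fault patches along strike."""
    L = np.zeros((n, n))
    for i in range(n):
        L[i, i] = -2.0
        if i > 0:
            L[i, i - 1] = 1.0
        if i < n - 1:
            L[i, i + 1] = 1.0
    return L

def invert_slip(G, d, weights, smooth=1.0):
    """Weighted, Laplacian-regularised, non-negative least squares for patch slip.
    G : (n_obs, n_patch) Green's functions (here a stand-in for Okada kernels)
    d : (n_obs,) observed displacements; weights : per-observation weights."""
    W = np.sqrt(weights)[:, None]
    A = np.vstack([W * G, smooth * laplacian_1d(G.shape[1])])
    b = np.concatenate([np.sqrt(weights) * d, np.zeros(G.shape[1])])
    slip, _ = nnls(A, b)
    return slip

# toy problem: 200 "observations", 30 patches, hypothetical kernels and data
rng = np.random.default_rng(1)
G = rng.random((200, 30)) * 1e-2
true_slip = np.clip(np.sin(np.linspace(0.0, np.pi, 30)), 0.0, None) * 2.0   # m
d = G @ true_slip + rng.normal(0.0, 1e-3, 200)
w = np.full(200, 1.0)
w[:20] = 30.0                      # e.g. first 20 rows up-weighted like the regional GNSS
print(invert_slip(G, d, w, smooth=0.05).round(2))
```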
Namely, both these previous studies show significant (>60 cm) slip in the Norcia earthquake at depths 6-10 km on the southernmost segment of the Vettore fault, whereas both our geodetic and bodywave results suggest no significant slip took place below ∼6 km in the Norcia earthquake. Our model also includes slip on several minor but important faults associated with the Vettore fault; the inclusion of slip on these faults affects the recovered distribution of slip on the main Vettore fault in the Norcia and Visso earthquakes (Supplementary Figs. S13-S24). Our Pian Piccolo and Norcia Antithetic faults are similar to two additional faults proposed by Cheloni et al. (2017). Our Pian Piccolo fault has a similar strike to the model fault that Cheloni et al. (2017) infer to be a reactivation of the Sibillini Thrust, but in our model this oblique structure has a steeper dip and projects to the surface ∼2 km further to the N (Fig. 3c). However, despite these differences in detail, both our studies recover the same oblique sense of slip on these two antithetic and oblique structures. Differences between our results and those from previous studies arise due to several factors. As well as differences in geometry of the major faults, our inversion technique (Floyd et al., 2016), has enabled us to jointly invert all geodetic datasets throughout the sequence for mutliple slip events on a single fault geometry, remov-ing the need to discard geodetic datasets that contain combined coseismic signals from the Visso and Norcia earthquakes (e.g. Xu et al., 2017) or make assumptions regarding the spatial separation of these signals (e.g. Cheloni et al., 2017). This inversion approach has enabled us to take advantage of an extra 4-5 Sentinel-1 interferograms spanning either or both of the Visso and Norcia earthquakes than are not used in previous studies (Cheloni et al., 2017;Xu et al., 2017). In addition, these extra interferograms, along with additional unique short-baselines GNSS data and other geological and geophysical constraints on the sequence (field mapping, bodywave mechanisms) have meant we are able to reasonably add several smaller faults into our model, as these are required to explain the new near-field datasets. Why does rupture stop? The clear spatial interdependence of slip from the three M > 6 earthquakes on the Vettore and Laga faults (Fig. 7) has been used to suggest that structural complexity along the main Vettore fault strand may have influenced evolution of the sequence, with previous studies focusing on the termination of the Amatrice earthquake (Chiaraluce et al., 2017;Cheloni et al., 2017;Pizzi et al., 2017;Mildon et al., 2017). Here we use our new results to investigate this idea further and propose that the intersection of several oblique and potentially seismogenic faults with the Laga-Vettore system halted rupture for each earthquake in the sequence, therefore determining their respective magnitudes and preventing cascading rupture in a single earthquake of M w 6.7 or larger. Active normal faults in the central Apennines are relatively young, having initiated within the past 2-3 million yr, and as a result the faults are segmented with lengths generally <20 km (e.g. Roberts and Michetti, 2004). Boundaries between neighbouring fault segments are known to commonly arrest through-going rupture in earthquakes (e.g. Biasi and Wesnousky, 2016). The primary structural trend in the region of the 2016 earthquake sequence relates to the main NW-SE striking normal faults. 
However, there is also a secondary oblique structural trend represented by NNE-SSW to NE-SW striking faults, many of which are currently active as normal faults, that either formed in the current extensional tectonic phase or are reactivated thrusts as-sociated with the previous compressional regime (e.g. Pizzi and Galadini, 2009;Coltorti and Farabollini, 1995;Civico et al., 2017). These have been suggested to act as structural barriers to rupture on the main NW-SE striking normal faults they crosscut or intersect (e.g. Pizzi and Galadini, 2009), by forcing segment boundaries on the major faults. We note linear features along this NNE-NE regional trend in both the relocated aftershocks (Chiaraluce et al., 2017, Fig. 3e) and geomorphology of the Vettore basin (Fig. 3c, d). The Piano Grande basin is rhomb shaped and bounded to the north and south by topographic ridges striking approximately 040 • (NE-SW). There are also sharp changes in slope observed at the northern and southern sides of the Piano Grande and Pian Piccolo basins, suggesting that oblique faults may have some control on the morphology of this elevated plain (Fig. 3d). This is supported by previous geological mapping which has identified normal faults bounding these basins to the north and south (Coltorti and Farabollini, 1995). The relocated aftershocks show predominant alignment along a similar, but slightly different NNE-SSW 015 • trend (Fig. 3e). Projecting aftershocks of the Amatrice earthquake onto the model Vettore fault along NNE or NE trends, we find lineations of aftershocks separate the hypocentres and major rupture extents of all three earthquakes (Fig. 7a, c). These aftershock alignments predate the Visso and Norcia earthquakes, so we interpret them as intersections between the oblique faults and the main Vettore-Laga fault system. The Amatrice earthquake initiated on the Laga fault and rupture propagated northwards onto the southern Vettore fault (Tinti et al., 2016), but the northern termination of significant slip in Fig. 8. Coulomb stress change on the Vettore-Laga fault system calculated prior to the Visso earthquake (a), prior to the Norcia earthquake (b) and following the Norcia earthquake (c). Warm colours indicate fault regions that have been brought closer to failure, cool colours indicate regions taken further from failure. White stars represent the hypocentres for the three main earthquakes as in Fig. 6. the Amatrice earthquake is bounded closely by the intersection of the NW dipping Pian Piccolo normal-oblique fault with the Vettore fault (Fig. 6a). Slip is needed on a structure with this geometry to explain the surface displacement during the Norcia earthquake (e.g. Fig. 5e) and we see evidence for it from relocated aftershocks (Chiaraluce et al., 2017, Figs. 6a, 7c) and in the large-scale geomorphology of the Piano Grande basin, as discussed above (Fig. 3d). The presence of a specific fault in this location from the surface geology is debated (Coltorti and Farabollini, 1995;Pierantoni et al., 2013), but we also note that some oblique faults in this region are poorly expressed at the surface and may only be revealed at depth by geophysical surveys (Civico et al., 2017). Some authors have suggested that the Sibillini Thrust, with a similar geometry to our Pian Piccolo fault but a much shallower dip (Fig. 3c, Pizzi and Galadini, 2009), is instead responsible for halting the northwards rupture of the Amatrice earthquake (Chiaraluce et al., 2017;Cheloni et al., 2017;Pizzi et al., 2017). 
We favour our interpretation of a steep structural barrier at depth over a shallowly dipping planar barrier (e.g. Cheloni et al., 2017; Fig. 7b, dashed grey line) on the basis of the aftershock data, the geomorphology and our new Sentinel-1 interferograms. However, our Pian Piccolo fault may well join with a steepened (∼40° dipping) lateral ramp of the Sibillini Thrust at depth (e.g. Pizzi et al., 2017; Fig. 7b, solid grey line), and these two scenarios are likely indistinguishable in our data. In either case, the key interpretation remains the same: a similar steep structure is inferred to have stopped northwards rupture of the Amatrice earthquake. Similarly, slip in the Visso earthquake is closely bounded up-dip and at its abrupt southern termination by two lineations of aftershocks, which we propose represent an unnamed fault with strike NNE and dip ∼55° to the east (dashed white line on Fig. 1, dashed black line on Figs. 3, 5, 7, 9b) and a possible small conjugate fault with apparent dip to the NW (Fig. 7, dashed purple line). These same suggested oblique faults also appear to constrain the overall pattern of major slip in the Norcia earthquake (Fig. 7a, b). The structural control here is twofold: as before, the faults may directly act as barriers to rupture, but in addition the limitations they place on slip in the two previous events leave stress shadows (Fig. 8b), which also act to constrain the slip in this last event. In particular we suggest this indirect structural control plays an important role for the southern termination of major slip in the Norcia earthquake; the Pian Piccolo fault appears to have not stopped rupture during the Norcia earthquake and the slip maximum in this event instead terminates against the slip maximum of the Amatrice earthquake, which acts as a stress shadow (Figs. 8b, 7). On a larger scale, it appears that the intersection of the Norcia Antithetic fault with the Vettore-Laga system may also have restricted slip in all three earthquakes to shallow depths of <∼6 km. If this sequence ruptured only the shallower half of the Vettore-Laga system in ∼12-15 km thick seismogenic crust (Chiarabba and De Gori, 2016), this could suggest that the deeper portion (depths >6 km) of the fault system is able to fail independently, and crosscutting structures may result in depth segmentation (e.g. Elliott et al., 2011) as well as along-strike segmentation of seismic slip.

Why does rupture start again?

Using our model slip distributions, we calculate the changes in Coulomb stress (King et al., 1994; Lin and Stein, 2004) on our model faults throughout the 2016 seismic sequence (Fig. 8; Supplementary Fig. S9), caused by the slip in each of the 4 modelled time intervals (Fig. 5g). We also include Coulomb stress changes from recent events prior to 2016: the 1997 Colfiorito earthquakes and the 2009 L'Aquila earthquake. Slip distributions for the 1997 and 2009 events are constrained by an inversion of ERS data for the Colfiorito earthquakes (Supplementary Fig. S25) and a previously published slip distribution for the 2009 L'Aquila earthquake (Walters et al., 2009).

Fig. 9. Spatio-temporal evolution of aftershocks on the Vettore fault before the Visso earthquake. (a) Time evolution of aftershocks. Aftershocks following the Amatrice earthquake (Chiaraluce et al., 2017) are plotted in grey, showing distance in the Vettore fault plane in the direction of the red arrow in (b). The coloured boxes contain the most distant 20% of aftershocks for progressive time periods, and the triangles show the median distance within each box. The star shows the location and time of the Visso hypocentre, and the black dotted, solid and dashed curves correspond to diffusive models with f = 0.18 and K = 4.8, 3.8 and 2.8 m²/s respectively. (b) Postseismic slip estimated from our geodetic model on the Vettore-Laga fault system, shown as in Fig. 6. The red arrow shows the direction in which distance is calculated in (a), and the coloured aftershocks correspond to those shown in (a). Intersections of model and inferred faults are shown as black solid and dashed lines as in Fig. 7.
Coulomb stress change (ΔCFF) is defined as

ΔCFF = Δτ + μ′ Δσn,

where Δτ is the change in shear stress, Δσn is the change in normal stress and μ′ is the effective coefficient of friction. Stress changes were resolved in the direction of slip of each fault patch. Where a fault patch did not slip in one of the time intervals, stress changes were resolved onto a rake of −90°. Coulomb stress change calculations were performed using the Coulomb 3.2 software (Lin and Stein, 2004) and a value of 0.4 was used for μ′, with elastic parameters kept the same as for our geodetic and seismological inversions. The Norcia hypocentre was brought closer to failure by 1.7 ± 0.18 MPa by all previous events, and given the short interval between the Visso and Norcia earthquakes, it is likely that static stress interactions brought the Norcia hypocentre to the brink of failure, precipitating its rupture 4 days later. The Visso hypocentre was brought closer to failure by 1.2 ± 0.55 MPa by the Amatrice earthquake and subsequent afterslip (Fig. 8a). Static stress transfer alone might be able to explain the two-month delay between the Amatrice and Visso earthquakes, with delayed failure triggered by a rate-and-state frictional response (e.g. Kroll et al., 2017). However, we also find a northwards progression of aftershocks along the Vettore fault during these two months, which reaches the Visso hypocentre at the time of the earthquake (Fig. 9a, b). Northwards aftershock migration and triggering was inferred to be associated with fluid diffusion following the two largest previous earthquakes in this region (Miller et al., 2004; Malagnini et al., 2012), so we also investigate this possibility in the following section.

Temporal migration of aftershocks

In order to investigate the spatio-temporal pattern of aftershocks in the interval between the Amatrice and Visso earthquakes, we take the earthquakes in this interval, projected onto the Vettore-Laga model fault (Fig. 7a), and first apply a time-varying filter to remove earthquakes with magnitude below the magnitude of completeness (estimated using the goodness-of-fit method, Fig. S5 in Chiaraluce et al., 2017). We calculate the distance from the location of peak slip in the Amatrice earthquake to all aftershocks to the north, in the plane of the model Vettore fault and in a direction approximately perpendicular to the intersection of the Pian Piccolo fault with this plane. This is also approximately parallel to (and directed updip along) the eastward dipping lineation of aftershocks seen in Fig. 7a. We plot this distance versus the timing of aftershocks following the Amatrice earthquake in Fig. 9a. We split these data into four successive time intervals, each containing 150 earthquakes, and find the 20% 'most-distant' aftershocks for each interval.
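The selection of the most distant aftershocks described above can be sketched numerically. The following fragment is an illustrative reconstruction only, not the catalogue or code used in the study: the array names and the synthetic catalogue are hypothetical, and it simply orders events in time, splits them into successive blocks of 150 events, and extracts the 20% most distant events and their median distance in each block.

```python
import numpy as np

def leading_edge(times, distances, block=150, frac=0.20):
    """For successive blocks of `block` aftershocks (ordered in time), return the
    median time and median in-plane distance of the `frac` most distant events."""
    order = np.argsort(times)
    t, d = np.asarray(times)[order], np.asarray(distances)[order]
    edges = []
    for i in range(0, len(t) - block + 1, block):
        tb, db = t[i:i + block], d[i:i + block]
        k = max(1, int(round(frac * block)))
        far = np.argsort(db)[-k:]                      # indices of the most distant events
        edges.append((np.median(tb[far]), np.median(db[far])))
    return np.array(edges)

# hypothetical catalogue: times in days after the mainshock, distances in km,
# generated with a square-root (diffusion-like) trend plus scatter
rng = np.random.default_rng(0)
times = np.sort(rng.uniform(0.0, 60.0, 600))
dists = 1.9 * np.sqrt(3.8 * 86400.0 * times) / 1000.0 + rng.normal(0.0, 1.5, 600)
print(leading_edge(times, dists))
```

Applied to the real in-plane distances, the median of each most-distant subset corresponds to the triangles plotted in Fig. 9a.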
We consider these earthquakes to represent the leading edge of any aftershock propagation, and see a clear temporal trend. We see no such pattern if we repeat the analysis to the south. This up-dip, northwards-only trend resembles that seen in studies following the nearby 2009 L'Aquila earthquake (Malagnini et al., 2012). Plotted spatially on the fault plane (Fig. 9b), the aftershocks appear to be propagating along the minor antithetic fault that ruptured in the Norcia earthquake, and the eastward dipping structure that we infer acted as a barrier to rupture in the Visso earthquake. This aftershock migration reaches the Visso hypocentre at the approximate time of the earthquake (Fig. 9a, b), suggesting a possible underlying triggering mechanism for the Visso earthquake. Northwards aftershock migration following the two largest previous earthquakes in this region was suggested to be driven by diffusion of over-pressured fluids from the region of mainshock rupture (Miller et al., 2004; Malagnini et al., 2012). We find the temporal evolution of aftershocks is also consistent with a similar process. If we plot the median distance of the 20% subset of aftershocks for each time interval, these points are consistent with a diffusive-like temporal trend (Fig. 9a). We consider a simple 1D model of a steady-state source of overpressured pore fluid that diffuses along the fault plane following the Amatrice earthquake (equation (8) in Malagnini et al., 2012). If aftershocks were triggered when the pressure increased by a fraction f of the difference between the overpressured source and the background hydrostatic pressure, then the aftershock sequence should propagate according to

x(t) = 2 erfc⁻¹(f) √(Kt),

where x is distance, t is time, K is diffusivity and erfc⁻¹ is the inverse complementary error function. Varying f and K, we find that forward models with K varying between 2.8 and 4.8 m²/s and f = 0.18, corresponding to an 18% increase in pressure above the background, reasonably fit the data (Fig. 9a). We therefore find that the temporal evolution of aftershocks is consistent with a diffusive process and appears to be spatially focussed along the fault intersections described in the previous section. This pattern could also arise from mechanisms other than fluid migration, such as the propagation of afterslip. However, we note that the magnitude of postseismic slip in this interval is predominantly zero within uncertainty bounds (Fig. 9b, Supplementary Fig. S6), and, in addition, aseismic-slip driven migration is typically over two orders of magnitude faster than the rates found here (Roland and McGuire, 2009). We highlight that more detailed analysis of additional geodetic data should be undertaken to fully rule out this alternative hypothesis: slip in the postseismic interval has high uncertainty here as it is constrained by three interferograms only, all of which also include coseismic signals (Fig. 5g). If we do consider this process as driven by diffusion of pore-pressure CO₂ or water along the fault intersections, then we can estimate a permeability for these regions using our diffusivity estimates and equation (3) in Townend and Zoback (2000), keeping the same values for lithological and fluid parameters suggested in Malagnini et al. (2012). We obtain a value of 2.4 to 4.1 × 10⁻¹⁴ m², which is consistent with fractured bedrock limestone and previous estimates of fluid permeabilities along nearby faults (Townend and Zoback, 2000; Miller et al., 2004; Malagnini et al., 2012).
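The diffusive forward model quoted above can be evaluated directly. The sketch below assumes the triggering-front form x(t) = 2 erfc⁻¹(f) √(Kt) for a steady overpressured source, which matches the definitions of f, K and erfc⁻¹ given in the text, and uses the parameter range quoted for Fig. 9a; it is an illustration, not the forward-modelling code used in the study.

```python
import numpy as np
from scipy.special import erfcinv

def trigger_front(t_seconds, K, f=0.18):
    """Distance of the pore-pressure triggering front for a steady overpressured
    source diffusing in 1D along the fault plane: x(t) = 2 erfc^{-1}(f) sqrt(K t)."""
    return 2.0 * erfcinv(f) * np.sqrt(K * np.asarray(t_seconds))

t_two_months = 60 * 86400.0                  # seconds in roughly two months
for K in (2.8, 3.8, 4.8):                    # diffusivities (m^2/s) quoted in the text
    print(K, trigger_front(t_two_months, K) / 1e3, "km after 60 days")
```

For K between 2.8 and 4.8 m²/s the front advances roughly 7-10 km in 60 days, of the same order as the ∼12 km separating the Amatrice slip maximum from the Visso hypocentre.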
Irrespective of its exact nature, we suggest that the process driving the northwards propagation in aftershock activity through time brought the Vettore fault significantly closer to failure as it traversed the ∼12 km towards the Visso hypocentre over an interval of approximately two months. However, this diffusive process evidently did not trigger failure of the intervening Norcia segment. We suggest that if the process was fluid-driven and constrained to the fault intersections, it could have bypassed the Norcia hypocentre (Fig. 9b). This may explain why the seismic activity in the sequence jumped from the southern to northern ends of the Vettore fault before finally rupturing the central section.

Implications for multi-fault failure, seismic sequences and seismic hazard

Structural complexity in fault networks (including gaps, bends, stepovers and intersections between faults) sometimes appears to halt rupture propagation during earthquakes and sometimes permits through-going rupture, allowing large multi-segment earthquakes (e.g. Biasi and Wesnousky, 2016). However, whilst numerical models of dynamic rupture support this role of structural barriers (e.g. Oglesby, 2008), palaeoseismological records cannot determine the relative importance of these features in halting real earthquake ruptures, with respect to the effects of the unknown distribution of pre-earthquake stress across fault networks. Our results from the Central Italy seismic sequence provide important real-world constraints on this problem, simply because we can consider the sequence as a failed multi-segment earthquake. As the different fault segments in our case were necessarily all near-critically stressed at the beginning of the sequence, our study clearly demonstrates the importance of pre-existing structure in stopping small earthquakes from becoming larger ones. Since the static stresses involved in the eventual triggering of the Norcia earthquake are significantly smaller than stresses at the crack tip during dynamic rupture, it is likely that the Laga-Vettore fault system would have ruptured in a single large earthquake if pre-existing structure had not arrested slip. However, it is also important to note that despite this clear structural control for most of the seismic sequence, the Norcia earthquake appears to have ruptured through a barrier that halted the Amatrice earthquake. Our study therefore highlights that structural barriers appear to play a vital but enigmatic role in determining whether a large earthquake or a seismic sequence occurs on a segmented, critically stressed fault system. Our results also suggest that these same structural barriers may have controlled the order and timing of earthquakes throughout the subsequent seismic sequence. We suggest that not only did the migration of pressure-driven fluids determine the temporal delay between the Amatrice and Visso earthquakes, but that the channelling of fluids along fault intersections caused the 'out-of-sequence' failure of the northern Vettore fault before the central portion. Fluid-driven migrating clusters of seismicity are thought to be a common tectonic process across a range of tectonic regimes and environments (Vidale and Shearer, 2006), with fluids preferentially migrating along relatively high-permeability faults, and triggering seismicity due to increased pore pressure.
This process is likely to be more common in normal-faulting regions such as Central Italy (Chen et al., 2012) than in other tectonic environments, and has been proposed to play a role in other recent seismic sequences in the region (e.g. Miller et al., 2004). Channelling of fluids along fault intersections is also to be expected. Significant variability in permeability is likely even across a single fault plane, and fault step-overs, transfer zones, and intersecting oblique and antithetic faults are all thought to act as higher-permeability conduits (Sibson, 1996), focussing flow in these regions that are typically highly fractured and critically stressed in the long term. This is supported by an investigation of over 200 geothermal systems in the western US (Faulds et al., 2011); over 50% of such sites are located on structural complexities in a region of active normal faulting. Conversely, only 6% of sites are located in mid-segments or at the point of maximum long-term displacement on major faults, implying low permeabilities in these regions, possibly due to thick layers of clay gouge. This may also explain why migrating fluids bypassed the region of the Norcia earthquake hypocentre, which was located in a fault mid-segment away from the inferred high-permeability fault intersections. We suggest here that intersecting structures in fault networks may play an important dual role in controlling the mode and timing of multi-segment failure: first, acting as barriers to rupture and determining whether multiple segments fail together as one big earthquake or separately as several; and second, controlling the timing and order of subsequent earthquakes if failure does occur in a protracted seismic sequence. Finally, we should also consider the possibility that these same processes operate on much larger spatio-temporal scales. The 2016 Central Italy seismic sequence may be part of a multi-decadal sequence of clustered seismic activity along with nearby earthquakes in 1997 and 2009 (Salvi et al., 2000; Walters et al., 2009). It has been suggested that previous decadal 'super seismic sequences' occurred in the Apennines in the 15th and 18th centuries (Chiarabba et al., 2011; Wedmore et al., 2017). We therefore may need to reconsider traditional concepts of the earthquake cycle, to include protracted coseismic periods that span decades rather than seconds.

Conclusions

The 2016 Central Italy earthquake sequence highlights the influence that structural complexity in fault systems may have in controlling and segmenting the rupture of critically-stressed faults, particularly in continental fault networks. Intersecting faults can act to limit rupture, but it is unclear under what conditions this will occur, and our results reinforce the recent suggestion that the final magnitude of a complex-rupture earthquake cannot be determined until rupture has stopped (Wei et al., 2011). In addition, these same structural barriers may play a dual role, also controlling the timing of failure in seismic sequences by channelling pressure-driven fluid flow along fault planes. The 2010 El Mayor-Cucapah earthquake in Baja California represents an important counterpoint to the Central Italy sequence. Despite the difference in tectonic region and style of faulting, both episodes of strain release comprised multiple sub-events, taking place on a complex network of fault segments (Wei et al., 2011), and both featured fluid-driven migration of earthquakes following the initial onset of seismicity (Ross et al., 2017).
However, at El Mayor-Cucapah the fault network failed as a single earthquake, whereas in Central Italy failure occurred sequentially over several months. In both cases, complexity of fault structure is key to understanding the pattern and evolution of seismic strain release. A better understanding of the factors that may halt rupture, and of the range of dynamic processes that may lead to cascading rupture over timescales ranging from seconds to years, is critical for improving our ability to predict whether faults will fail in large, complex earthquakes or in temporally distributed seismic sequences of multiple large events: two end-member scenarios with very different implications for seismic hazard.
Quantum Schwarzschild space-time

Using a new approach to the construction of space-times emerging from quantum information theory, we identify the space of quantum states that generates the Schwarzschild space-time. No quantisation procedure is used. The emergent space-time is obtained by the Poincaré-Wick rotation and Fronsdal embedding of a certain submanifold of the riemannian manifold of six-dimensional strictly positive matrices with the Bogolyubov-Kubo-Mori metric.

Introduction

One of us (RPK) has recently proposed a new approach to the problem of unification of quantum theory with general relativity theory. Its key idea is to "general relativise quantum theory" instead of "quantising general relativity". The main motivations are the important conceptual and mathematical problems of the main approaches to "quantisation of gravity", as well as the belief that quantum theory (and its unification with general relativity) requires solid conceptual and mathematical foundations, free of the concept of quantisation and free of perturbative expansions. The main tool allowing us to develop this idea is the new approach to the foundations of quantum theory proposed in [18,19]. According to it, the kinematics of quantum theory is a direct extension of probability theory to the regime where measures on commutative boolean algebras are replaced by integrals on non-commutative algebras. The novel and key mathematical aspect is provided by the use of the Falcone-Takesaki non-commutative integration theory [4], which allows one to construct a new mathematical framework for quantum theory without relying on Hilbert spaces or measure spaces in its foundations. The novel and key conceptual aspect is provided by the replacement of the orthodox linear geometry of Hilbert spaces by the non-linear quantum information geometry of spaces of integrals on non-commutative W*-algebras. The striking feature of this geometry is that it reduces in special cases to the projective (norm) geometry of complex Hilbert spaces and to the riemannian geometry of smooth differential manifolds. More generally, the new kinematics of quantum theory consists of two levels. The 'non-linear' level consists of quantum models M(N), defined as subsets of the positive part of the Banach predual N_* of a non-commutative W*-algebra N, equipped with a non-linear quantum information geometry (which is determined by some choice of such geometric entities on M(N) as quantum relative entropy, riemannian metric, affine connection, etc.). The 'linear' level consists of representations of this geometry in terms of linear non-commutative L_p(N) spaces. In particular, the L_2(N) space can be naturally equipped with an inner product, which makes it isometrically isomorphic to the Hilbert space H (of Haagerup's standard representation). This allows for a recovery of the kinematics of the orthodox approach to quantum theory as a special (self-dual) linear representation of the generically non-linear kinematics of M(N). This foundational framework for quantum theory offers new answers to the question "how to reconcile quantum theory with general relativity?", leading to a new approach to the problem of "quantum gravity". The quantum model M(N) together with its quantum information geometry is considered as the main underlying kinematic object of the theory, while the space-time geometry is considered as an emergent entity that encodes some part of the quantum information geometry of M(N).
The particular form of the quantum information geometry of M(N) depends on the definition of the experimental situation that is subject to description and prediction in terms of this quantum theoretic model. Because our subject of consideration is "quantum gravity" understood as a "general relativised quantum theory", we will constrain the discussion of geometric structures on M(N) to those that allow one to determine a particular quantum riemannian manifold (M(N), g). Particularly important examples include: 1) the riemannian geometry canonically derived from the Norden-Sen geometry (a riemannian metric g and a pair of affine connections that are mutually conjugate with respect to g) that is derived from differentiation of a single quantum relative entropy functional on M(N); 2) the solution of some variational equation determining the riemannian metric on M(N); 3) the riemannian geometry with a riemannian metric that is invariant with respect to the action of a given group G on M(N). (Note that the last example can be considered as an extension of the theory of W*-dynamical systems, allowing for more detailed analysis and specification of the spaces of states of those systems.) While using any of the above methods, one can observe an interesting feature of quantum information geometry: the points of the quantum information geometric manifold M(N) have internal structure, and the behaviour of smooth differential objects on M(N) depends on this structure. For example, the particular functional form of the differential geometric objects on M(N) depends on the choice of a functional representation of M(N) in some linear space. As a result, an additional degree of freedom is introduced into differential geometric discussions: not only the freedom of choice of representation in terms of non-linear coordinate systems on M(N), but also the freedom of choice of functional representation in terms of operators on some linear (typically Hilbert) space. For example, if M(N) is represented as a space of non-normalised strictly positive matrices over C, then one can vary the dimension of the representation, while keeping the same dim M(N). Hence, if one wants to identify the class of quantum riemannian manifolds (M(N), g) such that a certain geometric quantity A(g) constructed from g satisfies some condition C(A(g)), then this problem might be solved either by varying the internal structure of M(N) for a fixed functional form of g, or by varying the form of g for a fixed M(N), or by varying both these objects. The particular constraints on the variation of these objects have to be determined by a precise specification of the corresponding experimental situation. If a particular quantum riemannian manifold (M(N), g) is selected, then the quantum space-time can be obtained from it by the Poincaré-Wick rotation of the riemannian metric g to a lorentzian metric g̃. This requires one to specify a globally defined smooth field e of differential one-forms, and to provide a decomposition g = ĝ + e ⊗ e, where ĝ is a riemannian metric orthogonal to e ⊗ e. The Poincaré-Wick rotation amounts to the substitution of the riemannian metric g by the lorentzian metric g̃ := ĝ − e ⊗ e. The smooth vector field Z defined by g̃(Z, ·) = e is naturally timelike (g̃(Z, Z) < 0), so the lorentzian manifold (M(N), g̃) is time-oriented. Thus, according to the standard definition [2], it is a space-time. We will call the pairs (M(N), g̃) quantum space-times.
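As a concrete illustration of the Poincaré-Wick rotation just described, the following sketch flips the sign of the e ⊗ e part of a metric given in matrix form. It is a minimal toy example, not code from the paper: the flat metric on R⁶ and the choice of e as the first coordinate one-form are purely illustrative, and the sketch assumes e has unit norm with respect to g so that the decomposition g = ĝ + e ⊗ e holds.

```python
import numpy as np

def poincare_wick_rotation(g, e):
    """Replace the riemannian metric g by the lorentzian metric
    g_tilde = g - 2 e (x) e, i.e. flip the sign of the e-parallel part of
    the decomposition g = g_hat + e (x) e (assumes e has unit g-norm)."""
    return g - 2.0 * np.outer(e, e)

# toy example: flat euclidean metric on R^6 with e = dx^1
g = np.eye(6)
e = np.zeros(6); e[0] = 1.0
g_tilde = poincare_wick_rotation(g, e)
print(np.linalg.eigvalsh(g_tilde))   # one negative eigenvalue: signature (-,+,+,+,+,+)

# the vector Z defined by g_tilde(Z, .) = e is timelike
Z = np.linalg.solve(g_tilde, e)
print(e @ Z)                          # equals g_tilde(Z, Z); here -1 < 0
```

The resulting matrix has lorentzian signature, and the vector Z solving g̃(Z, ·) = e satisfies g̃(Z, Z) < 0, matching the time-orientation condition stated above.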
The recovery of the ordinary space-times from quantum space-times amounts to forgetting about the internal structure of the points of quantum model M(N ). This can be formalised by introducing the forgetful functor DeQuant lor,to from the category QMod lor,to of time-oriented lorentzian quantum models (M(N ),g) with isometric embeddings as arrows to the category SpaceTime of space-times with isometric embeddings as arrows. Note that it is possible to apply the steps of the Poincaré-Wick rotation and dequantisation in the reverse order, without changing the result of the procedure. In such case, however, the notion of quantum space-time does not appear. Moreover, the problem of analytic continuation in time variable seems to find much better environment on the level of quantum models. Hence, if some additional structures (such as glbal hyperbolicity) are also required to emerge from information geometry of quantum models, then it seems reasonable to leave forgetful dequantisation as a last step of the 'space-time emergence' procedure. The goal of this paper is to use the above general framework to construct a family of quantum models M(N ) that generate a particular (Schwarzschild) class of space-times, for a particular (Bogolyubov-Kubo-Mori) class of a quantum riemannian metrics g on M(N ). We begin by constructing a family of quantum models with elements belonging to the space of two-dimensional non-normalised strictly positive matrices that corresponds to three-dimensional flat euclidean space. Then we glue them to obtain the class of quantum models corresponding to six-dimensional euclidean space. Next, we chose a global smooth one-form field e, and provide the Poincaré-Wick rotation of g with respect to e, which results in the flat quantum space-time (M 6 ,g). Finally, we use the Fronsdal embedding [5] of the Schwarzschild space-time to six-dimensional flat space in order to specify a manifold of quantum states that determines the Schwarzschild space-time (M S ,g| M S ). 2 Riemannian BKM manifolds of quantum states Finite dimensional quantum models over type I factor algebras If the W * -algebra N contains no type III factor and if N + * contains at least one faithful element ω (i.e., ω(x * x) = 0 ⇒ x = 0 ∀x ∈ N ), then M(N ) can be represented as a space M(H ω ) of positive operators over Hilbert space H ω . The space H ω , as well as the representation π ω : N → B(H ω ), are uniquely constructed from a pair (ω, N ) by means of the Gel'fand-Naǐmark-Segal (GNS) construction [6,31]. If dim M(H ω ) = d < ∞, then M(H ω ) is just a subspace of the space M d (C) + of d-dimensional non-normalised density operators (positive matrices). Note that the 'reference' faithful quantum state ω ∈ N + * is not required to belong to M(N ). The space L 1 (N ) is always isometrically isomorphic to the Banach predual N * of N , while the space L ∞ (N ) is always isometrically isomorphic to N itself. If N is a type I factor, then π ω (N ) ∼ = B(H ω ), and the Falcone-Takesaki non-commutative L p (N ) spaces turn to the spaces L p (B(H ω ), Tr) of p-th Schatten-class operators, where Tr is a canonical trace on B(H ω ). In particular, the L 2 (B(H ω ), Tr) space is just a Hilbert space H HS with the Hilbert-Schmidt scalar product A, B HS := Tr(B * A) and vectors A, B provided by the elements x of B(H ω ) satisfying (Tr(x * x)) 1/2 < ∞. 
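The Hilbert-Schmidt picture described above is easy to make concrete. The sketch below is an illustrative fragment with hypothetical matrix values, not code from the paper: it builds elements of the form 2ρ^(1/2) (the p = 2 embedding discussed in the next paragraph) and evaluates the Hilbert-Schmidt scalar product ⟨A, B⟩_HS = Tr(B*A).

```python
import numpy as np

def hs_inner(a, b):
    """Hilbert-Schmidt scalar product <A, B>_HS = Tr(B* A)."""
    return np.trace(b.conj().T @ a)

def l2_embedding(rho):
    """p = 2 embedding of a strictly positive matrix: rho -> 2 rho^{1/2}."""
    w, v = np.linalg.eigh(rho)
    return 2.0 * (v * np.sqrt(w)) @ v.conj().T

# hypothetical non-normalised strictly positive 2x2 matrices
rho   = np.array([[1.2, 0.3], [0.3, 0.8]])
sigma = np.array([[0.9, -0.1], [-0.1, 1.1]])

x, y = l2_embedding(rho), l2_embedding(sigma)
print(hs_inner(x, y))                     # finite Hilbert-Schmidt pairing
print(hs_inner(x, x), 4 * np.trace(rho))  # <x, x>_HS equals 4 Tr(rho)
```

Note that ⟨x, x⟩_HS = 4 Tr(ρ) for x = 2ρ^(1/2), so the embedded vector is Hilbert-Schmidt precisely when Tr(ρ) is finite, in line with the condition (Tr(x*x))^(1/2) < ∞ quoted above.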
The embeddings of quantum models M(N ) into non-commutative L p (N ) space is provided, for p ∈ [1, ∞[, in terms of the embeddings When N is a type I factor, this turns to the family of embeddings For p = 2, this turns to the embedding ρ → 2ρ 1/2 of a space of non-normalised density matrices into a Hilbert space H HS . Extension of the above family of embeddings to the p = ∞ case is provided by logarithmic coordinates As showed by Jenčová [16,17], quantum models M(N ) can be equipped with differential manifold structure if all elements of M(N ) are faithful. For type I algebra N and dim M(N ) = d < ∞ this condition amounts to requiring strict positivity of elements of M(H ω ). This restricts considerations to the space M d (C) + 0 of strictly positive d-dimensional matrices. The quantum differential manifold M(N ) can be equipped with various differential geometric structures. In particular, one can consider riemannian metrics on it. Quantum information theory allows to impose some additional conditions on these metrics. The standard condition is the monotonicity of metrical distance d g of g under unit-preserving (T (I) = I) completely positive maps T , Condition (6) can be interpreted as a requirement that the loss of information content of quantum states should not lead to increase of their distinguishability. This condition selects a wide class of the Morozova-Chentsov-Petz quantum riemannian metrics [24,27,10]. An additional condition that g should allow a pair (∇, ∇ ) of Norden-Sen conjugate affine connections [25,32], selects for dim M(N ) < ∞ a family of γ-metrics, where γ ∈ [0, 1] for M(N ) = N + * [13,12,7], and γ ∈ {0, 1} for M(N ) = N + * 1 := {ω ∈ N + * | ω(I) = 1} [9,8]. For γ ∈ {0, 1}, the γ-metrics are known as the Bogolyubov-Kubo-Mori (BKM) metrics [1,21,23], while for γ ∈]0, 1[ they are known as the Wigner-Yanase-Dyson (WYD) metrics [35]. All γ-metrics, together with their corresponding Norden-Sen dually flat pairs of affine connections (∇ γ , ∇ 1−γ ), can be derived, for dim M(N ) < ∞ and M(N ) ⊆ N + * 1 , by differentiation of the Hasegawa relative entropy This derivation is provided by [14,3,11,22,15] where (∂ u ) φ is a directional derivative at φ in the direction u ∈ TM(N ). In particular, the BKM metric follows from differentiation of the Umegaki relative entropy [34] which is a γ → 1 limit of a Hasegawa relative entropy. Taking into account the key role played by the Umegaki relative entropy in quantum information theory (as opposed to other Hasegawa relative entropies), as well as the uniqueness of the BKM metric as the only monotone riemannian metric with flat Norden-Sen dual connections on the space of normalised quantum states [8,9], we will restrict our considerations to the BKM quantum riemannian metrics. It is important to note that the vectors of tangent space T φ M(N ) admit different representation in terms of L p (N ) spaces, corresponding to the various embeddings (3). Depending on the choice of particular representation of the tangent space of M(N ), the particular quantum riemannian metric g can take different functional forms. Equation (8) shows that the choice of a particular γ-metric leads to a natural choice of a preferred pair of coordinate systems, associated with a preferred non-commutative L p (N ) space representation via p = 1/γ. For this reason, we will consider the BKM metric expressed in terms of the logarithmic coordinates. 
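Because the BKM metric arises from differentiation of the Umegaki relative entropy, it can also be evaluated numerically without closed-form metric components. The sketch below is an illustrative fragment with hypothetical matrices, not code from the paper: it assumes the standard non-normalised extension D(ρ||σ) = Tr(ρ log ρ − ρ log σ − ρ + σ), and uses the standard Kubo-Mori form of the BKM metric in the logarithmic coordinates just mentioned, g(A, B) = ∫₀¹ Tr(ρ^s A ρ^(1−s) B) ds at ρ = exp(H), recovering the same number as a second difference of the relative entropy.

```python
import numpy as np
from scipy.linalg import expm, logm

def umegaki(rho, sigma):
    """Non-normalised (extended) Umegaki relative entropy
    D(rho||sigma) = Tr(rho log rho - rho log sigma - rho + sigma)."""
    return np.trace(rho @ logm(rho) - rho @ logm(sigma) - rho + sigma).real

def kubo_mori(H, A, B, n=400):
    """BKM (Kubo-Mori) inner product at rho = exp(H) in logarithmic coordinates:
    g(A, B) = integral over s in [0,1] of Tr(rho^s A rho^{1-s} B), by quadrature."""
    s = np.linspace(0.0, 1.0, n)
    vals = [np.trace(expm(t * H) @ A @ expm((1 - t) * H) @ B).real for t in s]
    return np.trapz(vals, s)

# hypothetical hermitian matrices: H = log(rho), A a tangent direction
H = np.array([[0.2, 0.1], [0.1, -0.3]])
A = np.array([[0.5, 0.2], [0.2, -0.1]])

# second difference of D(exp(H) || exp(H + t A)) at t = 0 reproduces g(A, A)
eps = 1e-3
dd = (umegaki(expm(H), expm(H + eps * A)) + umegaki(expm(H), expm(H - eps * A))) / eps**2
print(kubo_mori(H, A, A), dd)   # the two numbers should agree to several digits
```

Agreement of the two printed numbers is a quick numerical check of the statement that the BKM metric is obtained by (second) differentiation of the Umegaki relative entropy.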
The logarithmic representation of the BKM metric The mapping is a diffeomorphism acting on a space M d (C) + 0 of d-dimensional strictly positive matrices to a space M d (C) sa of d-dimensional hermitean matrices. In what follows, we will consider the submanifolds of M d (C) sa that are obtained using this mapping. In particular, an d-dimensional submanifold Q d of hermitean matrices corresponds to d-dimensional submanifold exp(Q d ) of strictly positive matrices. If the mapping Using the parametrisation H of Q d defined by (12), we can locally express matrix elements of the metric tensor (13) as This can be simplified to which follows from the equations: Family Q f g of 3-dimensional riemannian manifolds Let and where | x| := (x 1 ) 2 + (x 2 ) 2 + (x 3 ) 2 , and σ a are Pauli matrices. Consider the family Q f g of submanifolds of the space M 2 (C) sa of 2-dimensional hermitean matrices, defined by By definition, F is the largest set of functions (f, g) for which the manifold Q f g is a smooth submanifold of M 2 (C) sa . Every element of the family Q f g is equipped with a natural global parametrisation H f g and is diffeomorphic to R 3 . The manifold Q f g can be parametrised by spherical coordinates (r, θ, φ), which are introduced from R 3 by the inverse mapH −1 f g , wherẽ The non-zero matrix elements of the BKM metric tensor g, calculated in these coordinates using (15) read    g rr = 2e g(r) (2f (r)g (r) sinh f (r) + (f (r) 2 + g (r) 2 )) cosh f (r) g θθ = 2f (r)e g(r) sinh f (r) g φφ = 2f (r)e g(r) sin 2 (θ) sinh f (r). The flatness condition and its particular solution The manifold Q f g is flat if g rr = 1, g θθ = r 2 , g φφ = r 2 sin 2 (θ). In what follows we will construct the solutions (f, g) ∈ F of (23). The above system of differential equations is equivalent to where Q(r) = r 2 f (r) sinh f (r) . The second equation in (24) is quadratic with respect to f (r), so we can replace it by one of the equations of the form where F a for a = ±1 are odd functions that are analytic in some neighborhood of the real line Behaviour of F − and F + is presented on Figure 1. The function F + (f ) has one root in f = 0, while F − (f ) has roots in f = 0 and in f = ±f r (f r > 0). Moreover, lim f →∞ According to first equation of (24), g is unambiguously determined by f , so for (f, g) ∈ F, f has to be invertible. Hence, one can consider r as a function of f . Then (25) is equivalent to The local solution of this equation for f > 0 is given by where a = ±1 while f 0 > 0 and C > 0 are constants. In case of a = −1, we assume also that f For a = +1 the latter conclusion does not hold. From the asymptotic behaviour of F 2 we obtain lim The functions In what follows, we will assume a = −1, because we want the inverse of r a,C,f 0 to be defined globally on R + . We could also choose f 0 < 0, thereby obtaining a solution of (27) for negative f only, which is the mirror reflection of r −,C,−f 0 . It remains to show that (f, g) ∈ F for f = r −1 −,C,f 0 , where C > 0, f 0 ∈]0, f r [ are fixed constants, while . The non-trivial part of the proof amounts to showing that ∀ k∈N f (2k) (0) = 0, g (2k+1) (0) = 0. It is equivalent to the smoothness of manifold in r = 0. Below we outline the necessary steps of the proof: 1. From lim f →0 2. It follows by induction that f (n) (r) has the form 1 r n K n (f (r)) for n = 0, 1, 2, . . ., where K n (f ) are odd analytic functions of f which have zeros of 2 n 2 + 1 order in f = 0. For n = 1 we have K 1 = F − . 
In the proof of inductive step one needs to use equation (25) for a = 1, the fact that only odd coefficients of power series of K n are non-zero and lim f →0 3. Because of step one and two all derivatives of f exist at r = 0 and fulfill desired properties. 4. The function g is well defined at r = 0 and where is an even analytical function of f which has a zero of second order at f = 0. 5. g (n) (r) has the form 1 r n L n (f (r)) for n = 1, 2, . . ., where L n (f ) are even analytic functions of f which have zeros of 2 n 2 order in f = 0. It follows by induction similarly as in step two (note that L 1 = G). 6. From the two preceding steps it follows that all derivatives of g take finite values at r = 0 and have required properties. We conclude that (f, g) ∈ F, which finishes the construction of a three-dimensional flat manifold Q f g . Construction of the quantum Schwarzschild space-time Now we are ready to construct quantum Schwarzschild space-time, as a particular fourdimensional submanifold in six-dimensional flat manifold of four-dimensional hermitean matrices (corresponding by (11) to the manifold of four-dimensional strictly positive matrices). Let us chose any of the functions f determined in the previous section (this is done by choosing C, f 0 ∈]0, f r [ and setting f = r −1 −,C,f 0 ), and define g(r) = log r 2 2f (r) sinh(f (r)) . Using the map H f g given by (18), we define the following smooth injection Let M 6 := H 6 (R 6 ). The map H 6 is a diffeomorphism between R 6 and M 6 . The space (M 6 , g), where g is a BKM metric on M 6 , is a riemannian manifold isometric (by H 6 ) to a 6-dimensional euclidean space. We define a one-form field by e := dx 1 . As a result of the Poincaré-Wick rotation of riemannian manifold (M 6 , g) with respect to e, we obtain a flat pseudo-euclidean manifold, denoted by (M 6 ,g). The signature ofg is (−, +, +, +, +, +). Now we can use the Fronsdal [5] embedding of Schwarzschild space-time to 6-dimensional pseudo-euclidean space where we implictly use parametrization H 6 of M 6 and introduce function h(y) = y 2m dr[(2mr 2 + 4m 2 r + 8m 3 )/r 3 ] 1/2 . The constant m > 0 is a mass parameter characterising the solution. The space (M S ,g| M S ) is a maximal extension of the Schwarzschild space-time, known as the Kruskal-Szekeres extension [20,33]. Instead of M S , one can also choose the manifold which corresponds to the region of Schwarzschild solution considered originally in [30]. Then the limit m → 0 of M S is just a (flat) Minkowski space-time. Discussion In the preceding section we have shown that the quantum Schwarzschild space-time can be constructed as a result of particular choices of: 1) a manifold of non-normalised strictly positive density matrices, 2) a metric tensor on this space, and 3) global smooth field of one-forms (which defines the time orientation). All these choices are of purely kinematic character. When provided, they establish the emergence of a particular space-time from quantum information data. Recall that quantum models M(N ) can be considered as manifolds if they consist of faithful elements only. In the case of models over finite-dimensional algebras, this is equivalent to the requirement of strict positivity of non-normalised density matrices that form the representation of the quantum model over the GNS Hilbert space. This excludes the possibility of consideration of pure quantum states as elements of quantum manifolds. 
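The embedding function h(y) used in the construction above is an elementary one-dimensional integral and can be evaluated by quadrature. The sketch below is illustrative only: h uses the integrand quoted in the text, while the explicit sinh/cosh form of the first two embedding coordinates is the standard Fronsdal form for the exterior region r > 2m, assumed here rather than taken from the paper.

```python
import numpy as np
from scipy.integrate import quad

def h(y, m=1.0):
    """h(y) = integral from 2m to y of sqrt((2m r^2 + 4m^2 r + 8m^3)/r^3) dr."""
    integrand = lambda r: np.sqrt((2*m*r**2 + 4*m**2*r + 8*m**3) / r**3)
    value, _ = quad(integrand, 2*m, y)
    return value

def fronsdal(t, r, theta, phi, m=1.0):
    """Flat 6D coordinates of an exterior (r > 2m) Schwarzschild event,
    using the standard Fronsdal form for the first two components (assumed)."""
    a = 4*m*np.sqrt(1.0 - 2*m/r)
    return np.array([a*np.sinh(t/(4*m)),      # timelike flat coordinate
                     a*np.cosh(t/(4*m)),
                     h(r, m),
                     r*np.sin(theta)*np.cos(phi),
                     r*np.sin(theta)*np.sin(phi),
                     r*np.cos(theta)])

# induced interval between two nearby exterior events (signature -,+,+,+,+,+)
p, q = fronsdal(0.00, 3.000, 1.0, 0.5), fronsdal(0.01, 3.001, 1.0, 0.5)
dz = q - p
print(-dz[0]**2 + np.sum(dz[1:]**2))
```

The printed number can be compared with the Schwarzschild line element evaluated for the same small coordinate displacements.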
From the geometric perspective, this exclusion of pure states can be understood as a restriction of considerations to differential manifolds without boundary, since pure states form a subset of the boundary. Thus, emergent space-times are defined only for mixed states. A point of the quantum Schwarzschild space-time constructed in this paper is a four-dimensional strictly positive matrix, which is a direct sum of two two-dimensional strictly positive matrices. However, these matrices are not qubits in the usual understanding of that term, because qubits require an additional normalisation constraint, which is not satisfied by our construction. The normalisation condition reduces the dimensionality of the quantum model, so in order to construct four-dimensional space-times based on qubits, one would need to use a different representation of quantum models. While the choice of a quantum model and its geometry determines to a large extent the corresponding space-time (it remains to choose the global foliation for a Poincaré-Wick rotation, which for some models is naturally suggested by their geometry), the inverse problem of constructing a quantum model that generates a particular space-time is generally harder and might admit many very different solutions. This is also the case in this work: there might exist other quantum information models M(N) that generate the Schwarzschild space-time (either for the same or for some other choice of quantum riemannian metric). The characterisation of all quantum models (M(N), g, e) that generate the Schwarzschild space-time manifold remains an interesting open problem. The choice of the Bogolyubov-Kubo-Mori riemannian metric g on M(N) is determined, according to (9), by the choice of Umegaki's relative entropy functional on M(N). Hence, as long as no other principles determining the riemannian geometries on quantum models are considered, this is the most canonical choice (from the perspective of quantum information theory). On the other hand, the particular coordinate system used in the above derivation is by no means unique. It is just a convenient tool for the calculations required by the use of the Fronsdal embedding. Besides this particular aim, the construction of the Schwarzschild space-time based on a family Q_fg of manifolds is quite inconvenient. It would be interesting to find some other class of quantum information models generating the Schwarzschild space-time, which could be defined directly in terms of some operational (experimental) constraints. However, this would require the use of some other technique for constructing the four-dimensional manifold. The advantage of the method used in this paper is that it utilises the representation of quantum states in terms of Pauli matrices, which allows a remarkable simplification of the formula for the BKM metric. As a result, the system of differential equations generated by the flatness condition was analytically solvable. This shows that the problem of the operational meaning of a quantum model that generates a particular space-time is closely related to the method used to introduce a particular riemannian metric g on this model. Putting it more strongly, we think that in order to justify the choice of a quantum model which generates a given space-time, one necessarily has to provide an explicit operational semantics that serves as an environment (operational context) for such a choice.
Without such environment, it is impossible to identify the operational meaning of the mathematical parameters of the emergent space-times (e.g., the parameter m in the Schwarzschild solution (31)). Both Hilbert space based kinematics of orthodox approach to quantum theory and lorentzian geometry of space-time arise as two representations of the underlying quantum information geometry of M(N ). Both amount to forgetting some part of the structure of the quantum information model. But because they have the same origin, they are mutually related from scratch. As a result, to each quantum space-time (M(N ),g) there is assigned a 'classical' space-time (M c ,g c ) := DeQuant lor,to (M(N ),g), as well as a subset L := 1/2 (M(N )) of a Hilbert space H ∼ = L 2 (N ), such that to each element of M c there corresponds a vector in L ⊂ H. Given a GNS representation π ω : N → B(H ω ) of a finite dimensional algebra N (provided by the choice of some ω ∈ N + * ), the Hilbert space H is unitarily isomorphic to H HS = L 2 (B(H ω ), Tr) and the vectors in L ⊂ H correspond (via the inverse of ρ → 2ρ 1/2 ) to the non-normalised density matrices in B(H ω ). Hence, in particular, a (continuous or discrete) space-time trajectory in M c corresponds uniquely to a family of density matrices represented as a (respectively, continuous or discrete) trajectory of vectors in L ⊂ H. This way the point of a space-time and the density matrix of quantum theory can be considered just as two representations of the single quantum state of information ω ∈ M(N ). The difference between two space-time points can be identified only by specifying some difference between two quantum states of information that define these points. In this sense, the primary property of the space-time event is no longer its location in some causal poset. Causality is just a special, and emergent, case of correlativity: in general, the space-time events are distinguishable only by their correlation contents. If a particular operational semantics defining the geometric data (M(N ), g, e) is provided, then an emergent space-time becomes a purely epistemic entity: its points and its geometry represent only some quantified knowledge, with no ontological (substantial) contents whatsoever. Note that in this paper we discuss only the kinematic aspect of emergence of spacetime from quantum theory. The choice of a particular quantum riemannian metric and global 'temporal' one-form is considered as a part of a definition of quantum kinematics. The dynamical features of the relationship between quantum theory and space-time, including trajectories representing the non-linear quantum dynamics (generated by constrained maximisation of quantum relative entropy [18,19]) and the quantum analogue of the Hilbert-Einstein variational equations, will be discussed elsewhere.
Semileptonic B decays into even parity charmed mesons By using a constituent quark model we compute the form factors relevant to semileptonic transitions of B mesons into low-lying p-wave charmed mesons. We evaluate the q^2 dependence of these form factors and compare them with other model calculations. The Isgur-Wise functions tau(1/2) and tau(3/2) are also obtained in the heavy quark limit of our results. I. INTRODUCTION Recently, BaBar Collaboration has discovered a narrow state with J P = 0 + with a mass of 2317 MeV, D * s0 (2317) [1]. The existence of a second narrow resonance, D sJ (2460) with J P = 1 + , was confirmed by CLEO [2]. Both states have been confirmed by BELLE [3]. Soon after the discovery, another set of charmed mesons, D * 0 0 (2308) and D ′0 1 (2427) which have the same quantum numbers J P = (0 + , 1 + ) as D sJ has been discovered by BELLE [4]. Before their discovery, quark model and lattice calculations predicted that the masses of these states, in particular D * s0 (2317) and D ′ s1 (2460), would be significantly higher than observed [5], [6]. Moreover, these states were predicted to be broad due to the fact that they can decay into D K and D * K, respectively. Experimentally, the masses of D * s0 (2317) and D ′ s1 (2460) are below the D K and D * K thresholds and hence they are very narrow. These facts inspired a lot of theorists to explain the puzzle [7]. In this paper we will focus our attention on the weak semileptonic transitions of B mesons into lower lying p-wave charmed mesons (D * * ). These transitions were studied, within a quark model approach, for the first time in [8] and, more recently, in [9] where the authors take into account the symmetries of QCD for heavy quarks [10], already used in [11]. The light-front covariant model [12] was adopted to study the same subject in [13]. The relevant form factors were also evaluated, in the framework of QCD Sum Rules [14], in [15]. Here we employ a simple constituent quark model [16,17] to evaluate semileptonic form factors of B mesons into p-wave charmed mesons. The plan of the paper is the following. In the next section we describe our quark model; the third section is devoted to introduce and evaluate the s-wave to p-wave form factors. Our way to fix the free parameters of the model and the resulting form factors are discussed in section four, while in section five the heavy quark limit of the form factors are computed and compared with Heavy Quark Effective Theory predictions; the τ 1/2 , and τ 3/2 are also evaluated. In the last section we show and discuss our numerical results. II. A CONSTITUENT QUARK MODEL In our model [16,17] any heavy meson H(Qq), with Q ∈ {b, c} and q ∈ {u, d, s}, is described by the matrix where m Q (m q ) stands for the heavy (light) quark mass; q µ 1 , q µ 2 are their 4−momenta (cfr Fig. 1). ψ H (k) indicates the meson's wave function which is fixed by using a phenomenological approach. The meson constituent quarks vertexes, Γ in Eq. (1), are fixed by using the correct transformation properties under C, P and to enforce the relation For the odd parity, s-wave heavy mesons J P = (0 − , 1 − ), Γ is given by where ε is the polarization 4−vector of the (vector) meson H. The vertexes of the lower lying even parity heavy mesons, instead, are given by the matrices 1 As already discussed in [16,17], the 4−momentum conservation in the meson-constituent quarks vertexes can be obtained defining a heavy running quark mass. For details we address the reader to the references [16,17]. 
Here, for the sake of utility, we recall all the remaining rules of our model for the evaluation of the hadronic matrix elements of weak currents: a) for each quark loop with 4-momentum k we have a colour factor of 3 and a trace over Dirac matrices; b) for the weak hadronic current, q 2 Γ µ q 1 , one puts the factor where with Γ µ we indicate a combination of Dirac matrices. III. FORM FACTORS In this section we evaluate the form factors parameterizing the 0 − → (0 − , 1 − ) and 0 − → (0 + , 3 P 1 , 1 P 1 ) weak transitions. The decomposition of these matrix elements of weak currents in terms of form factors are the following (see also [8]) The calculation of the form factors in Eqs. (10) and (11) for the case of B → D(D * ) transitions has been done in Ref. [17]. However, for the sake of utility, the analytical expressions are reported in appendix A. One of the main results of this paper is the calculation of the form factors appearing in Eqs. (12), (13) and (14). By way of an example, in the following we describe the calculation of the matrix element give the expressions of the form factors F ± . In appendix B we collect the expressions for G (′) , F (′) and A (′) ± . Note that all the calculations are done in the frame where q µ where D is the integration domain (see Refs. [16,17]) defined by φ and θ are the azimuthal and the polar angles respectively for the tri-momentum k. K M = (m 2 I − m 2 2 )/(2m I ) and m I (m F ) is the mass of the initial (final) meson: in Eq. (15) m F = m 0 + . We choose the z−axis along the direction of q , the (tri-)momentum of the W boson (cfr Fig. 1). The analytical expressions for the form factors can be obtained by comparing Eq. (15) with Eq. (12) where (d ij = m i − m j and s ij = m i + m j ). The expressions for the remaining form factors in Eqs. (10)- (14) are collected in appendix A and B. IV. FIXING THE FREE PARAMETERS The numerical evaluation of the form factors given in Section III requires to specify the expression for the vertex functions and the values of the free parameters of the model. For the vertex functions we adopt two possible forms, the gaussian-type, extensively used in literature (see for example [18]) and the exponential one which is able to fit the results of a relativistic quark model regarding the shape of the meson wave-functions [19]. In our approach ω H is a free parameter which should be fixed by comparing a set of experimental data with the predictions of the model. In this paper we choose to fix the free parameters by a fit to the experimental data on the Br(B → Dℓν) [20] and on the spectrum of B → D * ℓν process [21]. The quality of the agreement between fitted spectrum and the corresponding experimental data may be assessed by looking at the Figure 2. It should be also observed the very small differences between the B → D * ℓν spectrum using the vertex functions in Eqs. (21)- (22). Regarding the B → Dℓν branching ratio, we obtain 2.00 (2.01) % for the exponential (gaussian) vertex function to be compared to the experimental value: Br(B → Dℓν) = 2.15 ± 0.22 (2.12 ± 0.20)% for the charged (neutral) B meson. At this stage the two different form of the vertex functions agree equally well with the experimental data. However, differences emerge when single form factors are considered (cfr for example Table III). V. HEAVY QUARK LIMIT In this section we perform the heavy quark limit for the form factors obtained in the previous sections. Before to do this we briefly remind the implications of the HQET on the heavy meson spectrum. 
In the quark model, mesons are conventionally classified according to the eigenvalues of the observables J, L and S: any state is labelled with the symbol 2S+1 L J . So, if we consider the lower lying even parity mesons (L = 1), the scalar and the tensor mesons correspond to 3 P 0 and 3 P 2 states, respectively. Moreover, there are two states with J = 1: the 1 P 1 and 3 P 1 , they can mix each other if the constituent quark masses are different as in the case of charmed mesons. For heavy mesons the decoupling of the spin of the constituent heavy quark, s Q , suggests to use a different set of observables: the total angular momentum of the light constituent, j q (= s q + L), the orbital momentum of the light degree of freedoms respect to the heavy quark, L, the total angular momentum J (= j q + s Q ), any state is labelled with L jq J . In this representation the scalar and the tensor mesons are labelled with P and are related to the 3 P 1 and 1 P 1 states by the [11] P 3/2 1 = + 2 3 The scaling laws of the HQET concern the 0 − → (P 23) and (24). For example To extract the heavy quark mass dependence from the expressions of the form factors we follow the same approach used in our previous paper [17]. We introduce the variable x, defined by x = (2αk)/m F , in such a way, neglecting the light quark mass respect to the heavy ones, the integration domain, near the zero-recoil point, simplify to 0 ≤ x ≤ α, 0 ≤ θ ≤ π and 0 ≤ φ ≤ 2π. Therefore, if we look at the expressions of F ± (q 2 ), Eqs. (19)- (20), near the zero recoil point (i. e. q 2 ≃ q 2 max ) we have, neglecting terms of the order of x 3 , For α ≪ 1 the integration can be easily done giving Thus, the τ 1/2 Isgur-Wise function resulting from our model is given by where we have also written the term (w − 1) 2 which was neglected in Eq. (26). A similar analysis can be performed on the heavy to heavy 0 − → P which defines the 0 − → P 1/2 1 form factors. It is very simple to obtain their scaling laws in the limit of heavy quark masses. Following the above method we obtain [24] where N = Similarly, we can evaluate the τ 3/2 Isgur-Wise function obtaining A comparison between our results and some others coming from quark models, QCD sum rules and Lattice calculations can be done looking at the Table II. The values of the τ functions at zero recoil point and their slopes are compatible. In particular, it should be observed that our results for τ 1/2 are practically the same obtained in the Isgur Scora Grinstein Wise (ISGW) model [8] and QCD Sum Rules findings [26,27]. 2 Regarding τ 3/2 , our result at zero recoil point is slightly larger of the results coming from other models, while the slope is comparable with others. Relations between the slope of Isgur-Wise function and τ functions at zero recoil points were derived, in the form of sum rules, by Bjorken [28] and Uraltsev [29] where n stands for the radial excitations and ρ 2 is the slope of the Isgur-Wise function ξ(w) which, in our model [17], is Our results for n = 0 oversaturate both the sum rules. For the Bjorken sum rule this is due to the small value we obtain for the derivative of the Isgur-Wise function (ρ 2 ) which is in any case compatible with the experimental value ρ 2 = 0.95 ± 0.09 [21]. We plan to study this problem in a separate work. However, a detailed discussion on these sum rules and the findings of quark models can be found in [30]. VI. 
NUMERICAL RESULTS AND DISCUSSION

All the results discussed in the previous section have been obtained without fixing the free parameters of the model. In this section we use the fitted values of the free parameters in Table I (cfr section IV for discussion) to obtain the results collected in Table III.

Table III: B → D** form factors evaluated at q² = 0 and at q²_max = (m_B − m_D**)², using the vertex function in Eq. (22); in parentheses, the values obtained using the gaussian vertex function (cfr. Eq. (21)). Columns: Form Factor, This work, Ref. [8], Ref. [13].

Note that we are considering, for a better comparison with other calculations, the helicity form factors (cfr, for definitions, for example, [32]). Looking at Table III, we can see that the absolute values of our form factors (at q² = 0) are larger than the ones in Refs. [8,13]; this naturally implies larger branching ratios in our model. In particular, our predictions for the branching ratios, using the exponential (gaussian) vertex function, are computed with τ_B0 = 1.536 × 10⁻¹² s [20]. Regarding the q² dependence of the form factors, we find a very good agreement with the numerical results assuming the following polar expression; the fitted values of a can be found in Table IV. (Here D′₁ and D₁ represent, respectively, the two different physical axial-vector charmed meson states. The physical D′₁ (D₁) is primarily P^{1/2}_1 (P^{3/2}_1). They differ by a small amount from the mass eigenstates in the heavy quark limit; for a discussion see [31]. In this paper we neglect these differences.) It is interesting to observe that the effective pole mass is not far from the mass of the B_c meson. The form factors F₀ and A₁ exhibit a different q² dependence; for them we use a different functional form, and the values of b are collected in Table IV. In conclusion, we have obtained, in a very simple constituent quark model, all the semileptonic form factors relevant to the transition of B into the low-lying odd and even parity charmed mesons. The free parameters of the model have been fixed by comparing model predictions with the B → D*ℓν spectrum and the B → Dℓν branching ratio. Our numerical results are generally larger than the results of other models. However, the form factors reproduce the scaling laws dictated by the HQET in the limit of infinitely heavy quark masses.

In this appendix we collect the analytical expressions for the 0⁻ → 0⁻ and 0⁻ → 1⁻ form factors defined in Eqs. (10) and (11), respectively: g(q²) = ∫ k² dk d cos θ ψ_I(k) ψ*_F(k) ..., where d_ij = m_i − m_j and s_ij = m_i + m_j (with m_i the mass of the i-quark); m_I and m_F are the masses of the initial and final mesons, respectively, and E_F (= √(q² + m_F²)) represents the energy of the final meson. The angle θ is defined in Section III after Eq. (18).

In this appendix we give the expressions of the form factors appearing in Eq. (14) (0⁻ → ¹P₁ transitions). We use the same notations as in the previous appendix.
Hurricane Harvey Impacts on Water Quality and Microbial Communities in Houston, TX Waterbodies Extreme weather events can temporarily alter the structure of coastal systems and generate floodwaters that are contaminated with fecal indicator bacteria (FIB); however, every coastal system is unique, so identification of trends and commonalities in these episodic events is challenging. To improve our understanding of the resilience of coastal systems to the disturbance of extreme weather events, we monitored water quality, FIB at three stations within Clear Lake, an estuary between Houston and Galveston, and three stations in bayous that feed into the estuary. Water samples were collected immediately before and after Hurricane Harvey (HH) and then throughout the fall of 2017. FIB levels were monitored by culturing E. coli and Enterococci. Microbial community structure was profiled by high throughput sequencing of PCR-amplified 16S rRNA gene fragments. Water quality and FIB data were also compared to historical data for these water body segments. Before HH, salinity within Clear Lake ranged from 9 to 11 practical salinity units (PSU). Immediately after the storm, salinity dropped to < 1 PSU and then gradually increased to historical levels over 2 months. Dissolved inorganic nutrient levels were also relatively low immediately after HH and returned, within a couple of months, to historical levels. FIB levels were elevated immediately after the storm; however, after 1 week, E. coli levels had decreased to what would be acceptable levels for freshwater. Enterococci levels collected several weeks after the storm were within the range of historical levels. Microbial community structure shifted from a system dominated by Cyanobacteria sp. before HH to a system dominated by Proteobacteria and Bacteroidetes immediately after. Several sequences observed only in floodwater showed similarity to sequences previously reported for samples collected following Hurricane Irene. These changes in beta diversity corresponded to salinity and nitrate/nitrite concentrations. Differential abundance analysis of metabolic pathways, predicted from 16S sequences, suggested that pathways associated with virulence and antibiotic resistance were elevated in floodwater. Overall, these results suggest that floodwater generated from these extreme events may have high levels of fecal contamination, antibiotic resistant bacteria and bacteria rarely observed in other systems. INTRODUCTION Hurricane Harvey deluged the Houston metropolitan area in August of 2017 with over a meter of rain in less than 48 h. This rainfall set a record for the continental United States (Cappucci, 2017), and exposed thousands, perhaps millions, of citizens and first responders to potentially contaminated floodwaters. In rural regions typical of areas north of Houston, flooding of agricultural land could release animal waste associated with areas used for animal grazing (Gentry et al., 2007). In suburban watersheds typical of the greater Houston-Galveston area, rainfall could accelerate the resuspension and transport of waste from onsite sewage facilities, such as residential septic tanks (Morrison et al., 2017). Indeed, waterways in the Houston-Galveston area frequently exceed fecal indicator bacteria (FIB) criteria during high flow and flood events (Petersen et al., 2006;TCEQ, 2013). Little is known about the health risks associated with exposure to sewage and other human waste in floodwaters in urban, industrialized watersheds (Ahern et al., 2005). 
Human waste presents a particular health threat (Soller et al., 2014) and the perception that floodwater is contaminated with sewage could further alarm and mentally traumatize the public and hamper recovery efforts (Few and Matthies, 2006;Du et al., 2012). These threats to public health are expected to worsen, as several models predict that the intensity, if not the frequency, of tropical cyclones and hurricanes will increase over the next few decades (Webster et al., 2005;Knutson et al., 2008). Extreme weather events could also alter the quality of receiving waters. Flooding can result in release of petroleum products and other hazardous materials that could stress aquatic systems (Rozas et al., 2000;O'Donnell, 2005;Girgin and Krausmann, 2016). This environmental risk is high in the Galveston Bay systems; the Houston Ship Channel is the largest petrochemical complex in the United States (Bridges, 2019). Floodwaters can also temporarily alter nutrient cycles. For example, Hurricane Bob, a category 3 storm when it landed on Cape Cod, increased nutrient loading to estuaries in Cape Cod, Massachusetts, but the system appeared to recover rapidly (Valiela et al., 1998). Hurricane Ivan exacerbated eutrophication in Pensacola Bay, Florida temporarily, but the system recovered in a few days (Hagy et al., 2006). The extent to which these few studies can be extrapolated to other areas with unique geographies reflects the paucity of data and inherent challenges of quantifying multiple stressors during extreme, yet ephemeral, events (Córdova-Kreylos et al., 2006). Metagenomic methods have the potential to provide additional insight the health of aquatic systems, particularly with respect to extreme weather events (Ghaju Shrestha et al., 2017). Here we apply metagenomics to determine the impact of Hurricane Harvey (HH) on the health of Clear Lake, an estuary between Houston and Galveston that connects with upper Galveston Bay. This estuary is popular with anglers and boaters and is routinely monitored by a consortium of state agencies, non-profits and academic institutions. We collected water samples at stations within well-defined water body segments that represent a range of salinity (fresh to brackish) and nutrients. Stations were sampled immediately before and after landfall of Hurricane Harvey, and then weekly into the fall. These samples were analyzed for fecal indicator bacteria (FIB), dissolved inorganic nutrients (DIN) and microbial community structure, as assessed by targeted metagenomic analysis of 16S rRNA gene fragment amplicons. FIB counts, DIN concentrations and other environmental parameters were compared to data mined from public archives. Relative to pre-storm levels, and values typical for waterbodies sampled herein, HH elevated FIB counts and lowered DIN and salinity concentrations. The structure of the community shifted from a community dominated by Cyanobacteria and Actinobacteria before the storm to a community dominated by the phyla Proteobacteria and Bacteroidetes immediately after the event. Shifts in the microbiological community structure corresponded to changes in salinity and NO x concentrations. Sampling Locations and Collection We selected sites around Clear Lake, an estuary between Houston and Galveston (Figure 1), based on the availability of long-term water quality data for these locations and ease of access. Water samples were collected on the afternoon of August 24th, 2017, 1 day before Hurricane Harvey landed in the Corpus Christi area. 
These samples, designated "pre" in this work, were collected at baseflow conditions (Supplementary Figure 1). A second set of samples was collected on August 30th. These samples, designated "HH" throughout, correspond to Hurricane Harvey samples and were collect hours after flow of a major tributary into Clear Lake peaked (Supplementary Figure 1). We added a second sampling site (N) when collecting the HH set to collect floodwater received by segment 1101C (Figure 1). Starting on September 8th we sampled six times to generate a "post" sample set; all post-HH samples were collected during baseflow conditions (Supplementary Figure 1). Samples collected in August and September were designated as summer season. Samples collected in October were designated as fall season. We also generated a mock sample by mixing raw sewage, collected as described previously (Amaral-Zettler et al., 2008), and surface water collected from station H (Figure 1) in March of 2018. The sewage and water were mixed at a ratio of 1 part sewage with 9 parts surface water. At each sampling station, we collected surface water samples with a bucket lowered from an overpass or dock. Temperature and dissolved oxygen were measured in situ at 3-5 cm beneath the surface with a YSI model 55 dissolved oxygen (DO) probe (YSI Inc., Young Spring, OH). Water samples were split in the field for FIB (E. coli and Enterococci), metagenomic and nutrient analysis. For FIB analysis, unfiltered samples were transported on wet ice and stored at 4 • C. Incubations for quantification of FIB were initiated within 24 h of sampling. FIB samples from pre-HH samples were discarded because we were locked out of our laboratory for several days and sampling holding times were exceeded. Water samples collected before HH were stored on wet ice, returned to the laboratory and filtered to collect microbial samples for metagenomic analysis and to archive nutrient samples within 24 h of collection. Following HH, all samples were filtered in the field immediately upon collection. For metagenomic analysis, water samples were pulled through a Sterivex SVGPL10RC 0.2 µm cartridge (EMD Millipore, Billerica MA) until refusal (no flow at 15 psi), with a hand vacuum pump, as described previously (LaMontagne and Holden, 2003). The volume filtered, which ranged from 75 and 300 ml, was measured with a graduated cylinder. Sample filtrates and cartridges were transported on wet ice, temporarily stored at −20 • C, and archived at −80 • C. Two technical replicates were generated by collecting duplicate samples from station J on the eve of the storm and from the sewage-spiked samples described above. Laboratory Methods Dissolved inorganic nutrient analysis for ammonium, orthophosphate, and nitrate were done by colorimetric analysis in microplates, as described previously (Ringuet et al., 2011). E. coli and Enterococci were enumerated using Colilert and Enterolert in the Quanti-Tray/2000 format following manufacturer recommendations (IDEXX, Westbrook, ME). Microbial community DNA for metagenomic analysis was recovered from the Sterivex cartridges as described previously (Amaral-Zettler et al., 2008) and assessed for molecular weight by agarose gel electrophoresis. These crude extracts were subsequently purified by passage through a OneStep PCR Inhibitor Removal column (Zymo, D6030) and purity was assessed by UV-spectra. Metagenomic analysis followed protocols outlined in the Earth Microbiome Project (Caporaso et al., 2011). 
Briefly, the V4 region of the 16S rRNA gene was amplified to generate an amplicon library. This library was multiplexed using Illumina-designed indices, pooled in equal amounts, and sequenced on an Illumina MiSeq instrument as described by Caporaso et al. (2012).
Data Analysis
Water quality data were analyzed and figures were generated with custom scripts presented in Supplementary Files 2, 3. These scripts included functions from the ggplot2 package (Wickham, 2009). This data set included water quality data collected as described above and public data previously collected by the Texas Commission for Environmental Quality (TCEQ) and cooperating organizations. Data from the TCEQ archive were limited to samples collected between January 1st, 2011 and May 6th, 2021. For this time period, the mean for each of the five segments was calculated by grouping by segment, month and year. This data range was then merged with water quality data generated from samples collected in this study to generate Supplementary File 4. MiSeq data were processed to determine alpha diversity of the microbial community using functions from DADA2 v. 1.20.0 (Callahan et al., 2016), with custom scripts presented in Supplementary File 5. Briefly, reads were filtered, trimmed, denoised, and merged to yield sequences from 251 to 253 nucleotides long. Chimeras were then removed with the function removeBimeraDenovo in DADA2, and putative non-chimeric sequences were assigned taxonomy and aligned with the functions IdTaxa and AlignSeqs in DECIPHER v 2.20.0 (Wright et al., 2012). Amplicon sequence variants (ASVs) and taxonomic identifications were merged to create a phyloseq-class object, available as Supplementary File 6, with functions in phyloseq v 1.36.0 (McMurdie and Holmes, 2013). ASVs with uncertain taxonomic identification at the phylum level were then removed before fitting the alignments into a phylogenetic tree with functions in phangorn v 2.7.1 (Schliep, 2010). Technical replicates (two samples collected at the same time and place but processed independently) were then merged, and metadata (volume filtered, environmental conditions, FIB counts, DIN, etc.) and reference sequences were combined to create a phyloseq-class object, available as Supplementary File 7, with functions in phyloseq and Biostrings (v 2.60.2; Pagès et al., 2021). Reference sequences were also exported in fasta format and compared to public sequences with the Seqmatch application (RDP Taxonomy 18) available through the Ribosomal Database Project (Cole et al., 2013). Default settings were used in Seqmatch. MiSeq data is available in the NCBI SRA under accession number/Bioproject ID: PRJNA795782. Alpha diversity (richness and Shannon indices) was estimated with the plot_richness function after sewage-spiked samples were removed. Analysis of variance, calculated with a core function in R version 4.1.1, was used to test for significance of differences between samples collected before HH made landfall (sampled August 25th) vs. samples collected immediately after the storm (August 30th) and in September and October. Significance of differences was assessed with a Tukey test using the function HSD.test in the R package agricolae v 1.3.5 (de Mendiburu, 2020). Coverage of the library of reads used for diversity analysis was visualized with the function rarecurve in vegan v 2.5.7 (Oksanen et al., 2020).
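As an illustration of the amplicon workflow just described, a minimal R sketch is given below. It is not the authors' script (their code is in the Supplementary Files): the file paths, truncation lengths, training-set file and metadata column are hypothetical, and taxonomy is assigned here with DADA2's assignTaxonomy for brevity rather than with IdTaxa/DECIPHER as in the text.

```r
# Minimal sketch of the DADA2 -> phyloseq amplicon workflow (hypothetical inputs)
library(dada2)
library(phyloseq)

fnFs   <- sort(list.files("fastq", pattern = "_R1.fastq.gz", full.names = TRUE))
fnRs   <- sort(list.files("fastq", pattern = "_R2.fastq.gz", full.names = TRUE))
filtFs <- file.path("filtered", basename(fnFs))
filtRs <- file.path("filtered", basename(fnRs))

# Filter and trim the paired reads, then learn error rates and denoise
filterAndTrim(fnFs, filtFs, fnRs, filtRs, truncLen = c(240, 160), maxEE = c(2, 2))
errF   <- learnErrors(filtFs, multithread = TRUE)
errR   <- learnErrors(filtRs, multithread = TRUE)
dadaFs <- dada(filtFs, err = errF, multithread = TRUE)
dadaRs <- dada(filtRs, err = errR, multithread = TRUE)

# Merge read pairs, build the ASV table and remove chimeras
mergers       <- mergePairs(dadaFs, filtFs, dadaRs, filtRs)
seqtab        <- makeSequenceTable(mergers)
seqtab.nochim <- removeBimeraDenovo(seqtab, method = "consensus", multithread = TRUE)

# Assign taxonomy (a SILVA training set is assumed) and assemble a phyloseq object
taxa <- assignTaxonomy(seqtab.nochim, "silva_nr99_v138_train_set.fa.gz", multithread = TRUE)
meta <- read.csv("sample_metadata.csv", row.names = 1)   # hypothetical metadata table
ps   <- phyloseq(otu_table(seqtab.nochim, taxa_are_rows = FALSE),
                 tax_table(taxa), sample_data(meta))

# Alpha diversity (observed richness and Shannon index), grouped by sample type
plot_richness(ps, x = "type", measures = c("Observed", "Shannon"))
```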
The relationship between microbial community structure and nutrient levels was determined with correspondence analysis with custom scripts presented in R markdown in Supplementary File 8. This workflow started with a phyloseq object (S7). A prevalence threshold of 10% was set to remove rare taxa. Counts of the remaining 1,616 ASVs were transformed with Hellinger option prior to non-metric multidimensional scaling analysis (NMDS) with functions in vegan v 2.5.7 (Oksanen et al., 2020). NMDS was first run with all 44 samples, prior to removal of sewage-spiked samples. Goodness of fit of NMDS ordination was visualized with a Shepard plot generated prior to fitting meta-data to the ordination with functions in vegan. The resulting ordination plots were visualized with functions available in R package ggordiplots v 0.4.0 (Quensen, 2020). Functional composition was predicted from the 1,616 ASVs used in NMDS analysis (above) with PICRUSt2 v 2.3.0-b (Douglas et al., 2020). To prepare the data, an ASV abundance table and fasta files were exported from a phyloseq object (S09) to a biom file (S10) and a sequence file (S11) using R package biomformat v 1.20.0 (McMurdie and Paulson, 2021). This pipeline, including bash scripts used in PICRUCSt2 analysis, are presented in Supplementary File 12. Differential abundance of pathways (Supplementary Files 13, 14) predicted from ASVs and ASVs themselves was assessed using functions in R package ANCOM-BC v 1.2.2 (Lin and Peddada, 2020), following scripts presented in Supplementary File 15. Pathways that were differentially abundant were plotted with functions available in R package Heatplus v 3.0.0 (Ploner, 2021), following scripts presented in Supplementary File 16. Pathway functions and expected taxonomic range associated with them were identified with the web application MetaCyc v 25.5 (Caspi et al., 2013). Environmental Conditions Hurricane Harvey lowered the salinity for Clear Lake. On the eve of HH (August 25th, 2017), surface salinity at stations C, K, and J, which correspond to water body segments 2425 and 2425B (Figure 1), ranged from 9 to 12 practical salinity units (PSU, Figure 2A). These pre-HH salinity levels are within the 95% confidence interval of the 10-year average for salinity for records for these two water body segments (2425 and 2425B, Supplementary Figure 1). Immediately after the storm, salinity dropped to < 1 PSU at all stations sampled herein and then gradually increased to pre-storm levels over the next 2 months (Figure 2A). Oxygen levels in Clear Lake and tributaries to that system did not show a strong response to HH. Across all segments, oxygen levels averaged 6.8 mg/L before HH and 5.2 mg/L after. This temporal difference was not significant (p = 0.131); however, spatial differences between segments were significant (p = 0.0003). Oxygen levels averaged 6.9-7.1 mg/L for segments 2425 and 2425B, respectively, and less than 6 mg/L for stations 1101C, 1101, and 1113B ( Figure 2B). Lowest oxygen concentrations were observed at station H in waterbody segment 1113B, where DIN concentrations are relatively high (see below). Hurricane Harvey lowered the concentration of dissolved inorganic nutrients (DIN). On the eve of the storm, nitrate/nitrite (NO x ) ranged from 1 to 59 µM (Supplementary Figure 3). These pre-HH NO x levels are in the range for records for the last 10 years for these segments (Supplementary Figure 3). 
Immediately after the storm, NO x ranged from 1 to 9 µM and varied significantly between segments (p < 0.001) and type (pre-storm, HH, and post-storm). Highest levels of DIN were observed for samples collected from segment 1113B, which is approximately 50 m downstream of the outfall pipe of a wastewater treatment plant.
FIGURE 2 | Boxplots of oxygen levels in the Clear Lake system. Sample type "pre" indicates samples collected on August 25th (before HH). Type "HH" indicates samples collected on August 30th (immediately after HH). Type "post" indicates samples collected from September 8th to October 30th. Type "TCEQ" indicates historical data collected by TCEQ and partner agencies during 2011-2021.
NO x showed a non-conservative mixing relationship with salinity (Figure 3); that is, the system is a sink for NO x . High concentrations (>10 µM) were associated with samples that showed salinities of 3 PSU or less. In contrast, samples with higher salinities (>3 PSU) typically showed NO x levels of 3 µM or less, which suggests a freshwater source. Ammonium and phosphate levels showed a similar pattern with salinity as NO x . High concentrations of ammonium (Supplementary Figure 4) and phosphate (Supplementary Figure 5) were associated with low salinities.
FIGURE 3 | Mixing diagram of salinity vs. nitrate/nitrite for the Clear Lake system. Note DIN data is not available for the sample collected before HH from waterbody segment 1113B. Segments and types are as in Figure 2.
DIN/P ratios were generally below 16 (Supplementary Figure 6). These ratios were on average highest (7-8) for segments 1101C and 1113B, respectively, and < 2 for segments 2425 and 2425B.
Fecal Indicator Bacteria
E. coli levels ranged from 488 to 1,733 MPN/100 ml for the six stations sampled on September 1st, 2017 (Figure 4), which was 72 h after HH passed over the study area. The geometric mean (GM) for this set of samples was 1,018 MPN/100 ml. These values exceeded the statistical threshold value (STV) for single samples and the GM recommended for water contact by the EPA (USEPA, 2012) and by the TCEQ for these particular water body segments (TCEQ, 2013). After 1 week, the E. coli levels had decreased to < 100 MPN/100 ml and remained relatively low until the end of October, when levels spiked again. Enterococci levels ranged from 63 to 3,050 MPN/100 ml for samples collected in the fall of 2017; however, because of logistical issues, Enterococci levels were not measured until September 18th. For this period (post-HH), Enterococci levels did not differ between segments sampled (Supplementary Figure 7), and 22 of 24 samples exceeded the STV for single samples recommended for recreational water contact by the EPA (USEPA, 2012); the GM (495 MPN/100 ml) exceeded, by an order of magnitude, the GM recommended for recreational water contact by the EPA (USEPA, 2012). FIB counts of samples taken in the post-HH period were significantly higher (p < 0.001) than counts for the same segments collected over the last decade, where the GM of Enterococci was 74 MPN/100 ml.
FIGURE 5 | Alpha diversity in the Clear Lake system before and after Hurricane Harvey. Y-axis indicates Shannon diversity indices. Categories correspond to before (pre), immediately after (HH) and more than a week after (post).
Microbial Diversity
Alpha diversity of the bacterial and archaeal community did not differ significantly between samples collected before and after HH (Figure 5).
Average Shannon diversity indices ranged from 4.89 to 4.95 for samples after the event and averaged 4.44 for samples collected immediately before the storm. Diversity was relatively lower for samples collected before HH at stations 1101 and 1113B, but only one sample was collected at that time point (Supplementary Figure 8). Average richness ranged from 633 to 761 for samples collected after the event and was 488 for samples collected before (Supplementary Figure 9). Rarefaction analysis suggested the sequence library had the depth to describe alpha diversity (Supplementary Figure 10). After quality control, which included removing sequences that did not classify at the phylum level, the average depth of the library was 112,178 reads. In other words, all 44 samples reached an asymptote. Beta diversity of the bacterial and archaeal community structure, as assessed by NMDS, differed significantly between samples collected before and immediately after HH (Figure 6).
FIGURE 6 | Non-metric multidimensional scaling analysis (NMDS) of microbial community structure for samples collected before and after Hurricane Harvey. NMDS was run on the abundance of amplicon sequence variants as described in Methods. Type indicates segment (see Figures 1, 2) sampled and season: "brack" corresponds to segments 2425 and 2425B, "fresh" corresponds to segments 1101, 1101C, and 1113B, "sewage" indicates a sewage-spiked sample and "HH" indicates samples collected immediately after HH. Ellipses were drawn to highlight the two coherent clusters supported by 95% confidence intervals. Figure was generated with scripts in Supplementary File 8.
The good fit (r 2 = 0.999) shown by the stressplot (Supplementary Figure 11) and the low stress value (0.036) indicate that this ordination is an excellent fit (Dexter et al., 2018). NMDS showed two clear clusters. Three samples collected before HH in segments 2425 and 2425B, which showed brackish salinities (8-12 PSU), formed a coherent cluster, with similarity to samples collected in the same segments in fall. Samples collected immediately after HH also formed a coherent cluster, with similarity to a sample spiked with sewage. One sample collected in March 2018 in segment 1113B clustered with the HH samples. Recovery of Clear Lake progressed from the summer through the fall in segments 2425 and 2425B. These segments typically have brackish conditions. In the summer following HH, the microbial community at these stations within Clear Lake looked similar, in terms of NMDS, to communities sampled from freshwater tributaries to the estuary (Figure 6). This recovery of the estuary's microbiome appeared driven by salinity (Supplementary Figure 12). Salinity appeared strongly (p = 0.001) associated with pre-HH samples. NO x and oxygen appeared strongly (p = 0.016 and 0.020, respectively) associated with post-HH samples. Phosphate also appeared associated with post-HH samples, but the significance was weak (P = 0.086). Bacterial community structure of the Clear Lake system shifted from a system dominated by Cyanobacteria before HH to a system dominated by Proteobacteria and Bacteroidetes immediately after (Figure 7). The SAR324 clade (Marine group) was relatively abundant before HH and in the fall in segments we defined as brackish (2425 and 2425B). A total of 59 phyla were detected in 7,491 ASVs generated from 44 samples. Almost all of these ASVs (7,410/7,491) classified as bacteria.
After removing taxa with relatively low (<10%) prevalence, almost all of the ASVs (1,534/1,617) were found to be differentially abundant, at a significance threshold of P < 0.05, in a model that tested the factors: salinity, NOx, and sample type (pre, HH, and post). In other words, these factors predicted the abundance of 95% ASVs. Sample type predicted the majority of abundances. For example, the abundance of 819 (51%) ASVs differed between pre-HH and HH samples and the abundance of 1,007 (62%) ASVs differed between pre-HH and post-HH samples. Of the 10 most abundant ASVs observed in samples collected immediately after HH, nine classified as γ-Proteobacteria. Most of these (7/9) classified within the family Comamonadaceae and showed similarity to bacteria typically observed in freshwater systems. For example, ASV18 showed similarity to Limnohabitans curvus MWH-C1a, which was isolated from a lake (Hahn et al., 2010). The other two highly abundant ASVs (ASV64 and ASV23) showed similarity to γ-Proteobacteria isolated from rhizosphere soil (Jung et al., 2007) and freshwater systems (Hahn, 2003), respectively. ASV6, the most abundant ASV in libraries generated from floodwater samples, accounted for 5-17% of the reads generated in those six libraries. This ASV showed similarity to Aquirufa strains isolated from lakes (Hahn, 2006;Lee et al., 2018). The ASV showed the greatest differential abundance between pre-HH and HH samples (ASV103) showed high similarity to two uncultured bacteria (KP686762 and KP686755) generated from floodwater collected in North Carolina immediately after Hurricane Irene (Balmonte et al., 2016). PICRUSt2 analysis predicted the abundance of 418 metabolic pathways from 1,616 ASVs generated from 44 samples. Differential abundance analysis suggested that 76 of these pathways were significantly different between samples. Cluster analysis, based on the relative abundance of these 76 pathways, suggested that floodwater samples formed a coherent group (Figure 8). That is, with one exception (sample eH), floodwater samples were relatively similar to each other in terms of predicted pathways. The outlying sample was also similar to samples collected immediately after HH in terms of numerically abundant phyla. In particular, eH and floodwater samples showed relatively high proportions of Proteobacteria and Bacteroidetes. Comparison of pathways predicted from samples collected immediately before and after HH, identified 29 differentially abundant pathways (Supplementary Figure 12); 14 of these were significantly higher in samples collected before HH and 15 were significantly higher after the storm. Half (7/14) of the pathways associated with samples collected before HH were biosynthesis pathways. These include PWY-5347, which produces methionine and PWY-5840, which produces menaquinol-7. In contrast, only 3 of 14 pathways that were differentially abundant in samples collected immediately after HH were biosynthesis pathways, and two of these biosynthetic pathways are associated with virulence. PWY0-1338 confers resistance to the antibiotic polymyxin and PWY-6143 produces pseudaminic acid, which is associated with pathogenic Gram negative bacteria (Schirm et al., 2003). The vast majority (11/15) of pathways that were more abundant in samples collected immediately after HH were degradation pathways. These include pathways ORNDEG-PWY, ARGDEG-PWY, and ORNARGDEG-PWY, which are associated with degradation of L-arginine, putrescine, 4-aminobutanoate, and L-ornithine (Caspi et al., 2013). 
Salinity and NO x levels appeared associated with the abundance of 30 pathways. Of these, 12 were associated positively with salinity and 13 were associated negatively (Supplementary Figure 13). All but one of the pathways positively associated with salinity were biosynthesis pathways. These included five pathways (PWY-6165, -6349, -6350, -6654, -6167) associated with archaea and PWY-622, which is associated with starch biosynthesis by photoautotrophs (Caspi et al., 2013). In contrast, 6 of 13 pathways negatively associated with salinity were degradation pathways. These included two pathways (PWY-5427, -6956) associated with naphthalene degradation by bacteria and PWY-5088, which is associated with glutamate degradation by members of the Firmicutes phylum (Caspi et al., 2013). NO x concentrations appeared associated with the abundance of five pathways (Supplementary Figure 14). The two positively associated pathways were degradation pathways; both are associated with mandelate degradation by Proteobacteria. Pathways negatively associated with NO x concentrations include PWY-6174, which is associated with the mevalonate pathway in archaea, and PWY-5183, which is associated with toluene degradation by Proteobacteria (Caspi et al., 2013).
DISCUSSION
Rising sea levels and warming waters, associated with global warming, are predicted to increase the frequency of coastal flooding (Vitousek et al., 2017). Global warming is also expected to increase the severity of hurricanes (Knutson et al., 2021). These climate-driven changes could alter the structure of coastal systems and offshore systems (Shore et al., 2021) and more frequently bring many people into contact with floodwater. This creates a public health risk (Cann et al., 2012; Du et al., 2012). The response of the system and risks to the populace will vary depending on the system and storm. Here we studied the water quality and microbial communities of samples collected from the Clear Lake system, a rapidly developing area between Houston and Galveston. Hurricane Harvey temporarily shifted the structure of the Clear Lake system from a brackish (~10 PSU) estuary, fed by eutrophic, fresh tributaries, to a freshwater system, with little difference between the lake and tributaries in terms of salinity, nutrients and other chemical parameters. The temporary shift to a freshwater system was accompanied by a dramatic, temporary decrease in cyanobacteria. In parallel, γ-Proteobacteria, which are typically observed in soils and freshwater systems, increased. This pattern of dilution and recovery is consistent with a model of the recovery time for salinity in that system (Du and Park, 2019), but is a few weeks slower than the time reported for salinity recovery for Galveston Bay (Steichen et al., 2020). Overall, the recovery of the Galveston Bay system appears slower than that of estuaries impacted by Hurricane Bob (Valiela et al., 1998) and estuaries impacted by multiple hurricanes in North Carolina (Peierls et al., 2003), and the shift in bacterial community structure is consistent with changes reported following HH for Galveston Bay (Yan et al., 2020). Bacteria dominated this system, as assessed by metagenomic analysis of PCR-amplified 16S rRNA gene fragments, and the structure of this community corresponded to salinity, DIN and oxygen concentrations. The relationship between salinity and bacterial community structure parallels a report that salinity corresponded to changes in viral community structure in Galveston Bay following HH (Woods et al., 2022).
These results also agree with a previous study of systems in Louisiana impacted by Hurricanes Katrina and Rita (Amaral-Zettler et al., 2008) and with previous reports for estuaries in general (Tee et al., 2021). The strong influence of DIN corresponds to the dogma that nitrogen limits productivity in coastal systems. That is, if nitrogen limits primary production, a change in nitrogen availability would change the entire system. Indeed, low N/P ratios suggests that nitrogen limits productivity in Clear Lake, which is consistent with Ryther and Dunstan's dogma (Ryther and Dunstan, 1971); however, only inorganic nutrients were measured herein. Organic matter also contains significant pools of nitrogen and phosphate. For example, in Galveston Bay total nitrogen concentrations were about 5X higher than DIN concentrations for samples collected following HH (Steichen et al., 2020). Oxygen was not depleted significantly in water segments sampled herein following HH, relative to pre-storm levels and historical records, but oxygen levels did relate to microbial community structure (Supplementary Figure 12). Hypoxic conditions (<3 mg/L) were only observed once in this study. This agrees with a previous report for Bayous in the Houston-Galveston area, where relatively rural watershed receiving waters, like Peach Creek, did not go hypoxic, with the exception of the headwaters of Clear Creek (Kiaghadi and Rifai, 2019). The general lack of hypoxia in this system contrasts with previous reports for other systems in the Gulf of Mexico. For example, hypoxia persisted in Pensacola Bay for months following Hurricane Ivan (Hagy et al., 2006) and floodwaters overlying New Orleans were hypoxic following Hurricane Katrina (Pardue et al., 2005). High E. coli MPNs for samples collected immediately after HH, suggests that floodwaters were contaminated with fecal matter. These elevated MPNs agree with previous reports for Bayous within the Galveston Bay system (Yu et al., 2018;Kiaghadi and Rifai, 2019;Yang et al., 2021), for the Guadalupe River (Kapoor et al., 2018), which was also in the path of HH, and the report of Enterobacteriaceae in marine sponges offshore of Galveston Bay (Shore et al., 2021). The EPA and TCEQ (2013) recommend Enterococci for estuaries and coastal waters; however, because of logistical issues, Enterococci MPNs were not available for several weeks after HH. Levels of these FIB remained elevated relative to typical levels for this system for weeks (Supplementary Figure 7). These high MPNs agree with the observation that bacteria typically observed in human waste, such as Bacteroides spp., abounded in libraries generated from all samples collected immediately following HH (Figure 7). PICRUSt2 analysis suggested that flooding also enriched for antibiotic resistant genes (ARG), virulence factors and carbon cycling pathways. These predictions of functional genes from rRNA sequences, and the inference of microbial community structure from targeted metagenomic analysis in general, should be treated with caution. Every step in targeted metagenomic analysis, from sampling to data analysis is fraught with bias (Pollock et al., 2018). In particular, PICRUSt2 depends on reference genomes, which are largely derived from the human gut microbiome. This creates a bias depending on the sample type (Sun et al., 2020). For example, PICRUSt2 underestimates certain pathways in soil systems (Toole et al., 2021). 
Prediction of increase in carbon cycling bacteria agrees with reports that loading of dissolved organic carbon (DOC) during extreme weather events can enhance carbon cycling by bacterial communities in receiving waters (Balmonte et al., 2016) and high DOC levels in Galveston Bay following HH (Steichen et al., 2020;Yan et al., 2020). Because of the velocity of water moving through the system, metabolic pathways associated with floodwaters sampled herein were ephemeral and do not suggest long term changes in microbial community functions for the Clear Lake system. Nevertheless, prediction of ARG and virulence factors with PICRUSt2 in samples collected immediately following HH suggests that these floodwaters could pose a public health risk. The abundance of these virulence factors agrees with previously published qPCR measurements of ARG in samples collected from soils flooded during HH (Pérez-Valdespino et al., 2021), in samples collected within Galveston Bay 2 weeks after HH (Yang et al., 2021), and ARG and pathogens in floodwaters and bayous following HH (Yu et al., 2018). CONCLUSION The massive influx of freshwater from Hurricane Harvey into the Clear Lake system temporarily changed the system from a brackish estuary with relatively low levels of FIB and a microbial community dominated by primary producers, to a freshwater system with high levels of FIB. The microbial community observed immediately following the hurricane included bacteria that have also been reported in estuaries following hurricanes, but rarely elsewhere, and enrichment of antibiotic resistant bacteria. Recovery of the system to pre-storm conditions, in terms of nutrients and salinity, exceeded 2 months. DATA AVAILABILITY STATEMENT The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article/Supplementary Material. AUTHOR CONTRIBUTIONS ML, GG, TG, and MA conceived of the study. ML and GG selected the sampling locations and collaborated with TG on initial sample processing. ML prepared the samples for metagenomic analysis. YZ and MA conducted the metagenomic analysis and the conducted initial bioinformatics. ML wrote the scripts for bioinformatics and generated the figures. ML, YZ, GG, TG, and MA wrote the manuscript. All authors contributed to the article and approved the submitted version. FUNDING This work was supported by the NSF awards 1759542 and 1759540. ACKNOWLEDGMENTS We thank Diep Le and Theodore Richardson for assistance with laboratory and data analysis support, respectively. SUPPLEMENTARY MATERIAL The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fmicb. 2022.875234/full#supplementary-material Supplementary Figure 8 | Alpha diversity of bacterial communities in the Clear Lake system before and after Hurricane Harvey. Shannon diversity indices were calculated by targeted metagenomic analysis, as described in Methods. Boxes, whiskers and horizontal lines are described in Figure 2. Sample types are described in Figure 3. Figure was generated with scripts in Supplementary File 5. Supplementary Figure 9 | Microbial richness in Clear Lake system before and after Hurricane Harvey. X-axis indicates number of amplicon sequence variants. Categories correspond to before (pre), immediately after (HH) and more than a week after (post). Figure Supplementary Figure 12 | Fit of environmental data to NMDS model of microbial community structure. 
Stress value for the model was 0.032, which suggests an excellent fit. Environmental variables that showed a significant (P < 0.10) relationship with community structure are shown. Only samples with DIN data available are shown. Note that the conductivity and salinity vectors were practically identical to each other and would overlap, so only the salinity vector is shown.
Microbiome and the future for food and nutrient security
Global demands for food and fibre will increase by up to 70% by 2050. This increase in agricultural productivity needs to be obtained from existing arable land, under harsher climate conditions and with declining soil and water quality. In addition, we have to safeguard our agricultural produce from new, emerging and endemic pests and pathogens. Harnessing natural resources including the 'phytomicrobiome' is proposed to be the most effective approach to improve farm productivity and food quality in a sustainable way, which can also promote positive environmental and social outcomes. Conventional farming that uses chemicals in the form of fertilizers and pesticides has substantially increased agricultural productivity and contributed immensely to food access and poverty alleviation goals. However, excessive and indiscriminate use of these chemicals has resulted in food contamination, negative environmental outcomes and disease resistance, which together have a significant impact on human health and food security. Microbiome technology has the potential to minimize this environmental footprint and at the same time sustainably increase the quality and quantity of farm produce with fewer resource-based inputs. Plants and associated microbiota evolved together and have developed a mutualistic relationship where both partners benefit from the association. However, plant breeding programmes have unintentionally broken this association, resulting in the loss of key beneficial members of the crop microbiome. From the limited knowledge obtained to date, it is evident that crop yields and fitness are linked to the plant microbiome. Harnessing the plant microbiome therefore can potentially revolutionize agriculture and food industries by (i) integrating crop health with better management practices for specific climatic conditions to improve productivity and quality; (ii) using environmentally friendly approaches to control pests and pathogens and thus reduce the use of chemical pesticides with environmental and health implications; (iii) considering smarter and more efficient methods for using natural resources including soil and water; (iv) producing a better quality of food with less chemical contamination and fewer allergens; and (v) minimizing losses by improving crop fitness in extreme weather or future change scenarios.
Rhizosphere versus phytomicrobiome approaches
The phytomicrobiome consists of microbiota associated with all plant compartments (e.g. root, stem, leaf, flower, seeds). However, the majority of research in this area is focussed on the rhizosphere microbiome, which drives key interface interactions between plant roots and soils in terms of resource acquisition and plant health. A body of work has demonstrated the key role of the rhizosphere microbiome in nutrient acquisition, disease resistance, resilience to abiotic stresses and fitness in novel environments. However, the phytomicrobiomes of other plant-associated niches (leaf, stem, endophytes) have received much less attention. Such bias is linked to technical challenges associated with characterizing the leaf, stem and other parts of the plant. Amplifying bacterial marker genes (16S rDNA) from plant tissues is challenging, as bacterial DNA is overwhelmed by chloroplast and mitochondrial DNA, which show high sequence similarities with the Chlorobi/Chloroflexi/Cyanobacteria phyla.
In recent years, the use of peptide nucleic acid (PNA) that blocks the amplification of contaminant sequences has helped to improve the efficiency of bacterial amplicon sequencing. While the sequencing of fungal amplicons has been technically easier, the lack of universal primers that provide a consistent, unbiased overview limits the information on the fungal members of the phytomicrobiome. Application of technologies (such as shotgun sequencing) that can provide a comprehensive overview of the functional potential of the phytomicrobiome remains challenging, given that microbiome sequences are masked by plant sequences, resulting in extremely low coverage of the microbial metagenome from plant tissues. Technologies which can specifically enrich microbial DNA/RNA from plant materials are needed. Although with low efficiency, some commercial kits can selectively enrich bacterial mRNA and have the potential to circumvent this issue for the bacterial community to some extent; however, similar technologies are needed for the fungal phytomicrobiome, given that fungi play a significant role in both nutrient use efficiency and plant protection against biotic and abiotic stresses. In addition to the technical issues highlighted, the lack of a holistic approach for plant microbiomes is based on the assumption that the rhizosphere microbiota plays the most important role in plant productivity. It can be argued that, based on limited available evidence, the root (rhizosphere and root endophytes) may play a more important role in nutrient uptake, while other sections of the plant microbiome play a stronger role in defence against pathogen and pest attacks and in resource use efficiency, thus affecting the quality and quantity of plant yield. However, this may not be true in all cases and may be crop-, region- and climate-specific. Therefore, we first need a complete characterization of the phytomicrobiome associated with crop varieties and different compartments (e.g. leaf, stem, root) grown under different environmental and climatic conditions. This will allow us to characterize the core microbiome of crop species and distinguish between varieties and environmental conditions. Further characterization of the role of the core microbiota in crop fitness and yield, combined with identifying the metabolic pathways of the microbiome, will help in designing tools to manipulate the phytomicrobiome to sustainably increase agricultural productivity and the quality of food. The uniqueness of the phytomicrobiome among different niches and varieties provides both opportunities and challenges for harnessing the phytomicrobiota for increasing agricultural productivity, improving quality of food and sustaining environmental functions. There is significant evidence to suggest that ecological functions performed by the phytomicrobiome extend a plant's ability to adapt to different environmental conditions and changes (Bulgarelli et al., 2013), which is of primary significance for plant fitness considering plants' sessile lifestyles. However, traditional crop breeding programmes do not consider key components of crop fitness, i.e. the phytomicrobiota, and as a result, some of the susceptibility of crops to biotic and abiotic stress can be attributed to this neglect. Excessive use of agrochemicals has also negatively impacted the strength of this relationship. Therefore, future breeding programmes will need to use a combination of genetic information from the host and metabolic pathways from the associated microbiomes.
Such an approach is critical to ensure all intended benefits of breeding programmes without losing beneficial microbiota, which in turn impacts plant fitness and resilience against biotic and abiotic stress. Going forward, the use of agrochemicals (particularly fertilizers) will remain an important ingredient of agriculture; however, their precision use, combined with better chemistry and improved breeding programmes that explicitly consider the health of the phytomicrobiome, will be integral to sustainably increasing agricultural productivity and food quality.
Ability to manipulate the microbiome in situ
In recent years, our understanding has improved regarding the levels of soil biodiversity, the drivers of biodiversity in agricultural systems and the relationships between biodiversity and ecosystem functions, including nutrient availability and agricultural productivity. Key knowledge on the critical role played by the microbial community in the rhizosphere, particularly in nutrient acquisition and disease resistance, has also improved. But our ability to manipulate microbial diversity for improved production is limited either to altering management practices (i.e. tillage, residue retention, use of agrochemicals, etc.) or to the addition of microbial inoculants. The use of microbial inoculants has so far had limited success in field conditions, mainly due to competition with the indigenous microflora of soils. However, there is strong evidence to suggest that plants and their associated microbiota (particularly of the rhizosphere) constantly communicate with each other for resource requirements and defence against pathogen and parasite attacks. However, we have limited knowledge on the communication (signal) molecules used by plants or microbes for these communications. Identifying these signal molecules should be a primary focus of research, as this can provide an effective tool for manipulating plant-microbe interactions for maximizing resource availability and plant protection. For example, signal molecules (or their inhibitors) could be used to specifically promote the activity of beneficial microbes, to increase microbial mobilization of nutrients (nitrogen and phosphorus) and to defend against pathogens and pests when needed (synchronized supply with demand). However, this is a significant challenge given that the quantity of signal molecules in root exudates and microbial biofilms is extremely low and is difficult to characterize by available technology. Along with increasing the sensitivity of different spectroscopies, an integrated approach of metagenomics, metatranscriptomics and metabolomics will be needed to characterize signal molecules and their diversity and specificity, in order to harness these for improving farm yields and quality. In situ microbiome engineering (Mueller and Sachs, 2015) can be the tool of choice for harnessing the microbiome for beneficial outcomes in agriculture and food industries. This technology is proposed to manipulate the microbiome without culturing and to move beyond current technologies such as the use of selective antibiotics and probiotics (Sheth et al., 2016). Synthetic biology will play an important role in engineering novel but predictable functions in crop probiotics which, upon addition to plants and soils, will manipulate the microbiome and/or its activities in a predictable fashion.
For example, bacteria could be engineered to modulate microbiomes or crop physiology by secreting specific chemicals, which in turn enhance crop resilience against resource and biotic stresses by stimulating the activities of beneficial microbiomes. When fully functional, these tools have the potential to revolutionize agricultural productivity and bring similar levels of productivity gains as observed during the green revolution. However, this is a mid-to long-term goal for larger scale uses in agro-ecosystem, given the complexities of the soil microbiota and the variety of signal molecules they utilize. Personalized food and nutrient security In developing countries, the focus will be to increase agricultural productivity to ensure food security, whereas in developed countries, nutrient security and healthy food will become main policy drivers. An emerging concept is personalized diet/nutrients for better health outcomes. This will require food to be grown differently to minimize chemical contamination and reduce the concentration of natural allergens. Personalized diets will explicitly consider individual genetics, physiology and differences in microbiomes and their metabolic activities. Initial research supports the case for personalized diets as no two individuals respond identically to the same food, suggesting a key role for host-microbiome interactions in nutrient outcomes. For example, an important role in glucose haemostasis and obesity has been found in the gut microbiota. Together, this evidence challenges the traditional concept of a healthy diet with an optimized diet based on the unique host-microbiome make-up (Zeevi et al., 2015). In future, people will be grouped based on their microbiomes for personalized diets. This can herald a new era of healthy lifestyle and prevention of metabolic (diabetes, heart disease) and physiological (allergy to natural compounds) conditions. In addition, probiotic cocktails will be developed and used to suppress known allergens or to affect nutritional uptake for specific food to minimize the negative impact of allergens on sensitive individuals. Current global initiatives There has been tremendous interest in harnessing the microbiome for increasing agricultural productivity. In 2016, two key initiatives have been launched which explicitly recognize the potential of the microbiome approach. (i) The White House has launched the US microbiome initiative on 14 May 2016 with an investment of $450 million to enhance innovation and commercialization and for developing new, related industries. Crop and soil microbiomes are a core component of this initiative and are working closely with the Phytobiome initiative to ensure success. (ii) The EU Commission has launched the International Bioeconomy Forum (IBF) on 13 October 2016, and harnessing microbiomes for food and nutritional security is their first and key component, along with regional economic growth and job creation. Both initiatives envisage public-private partnership models as the key for rapid innovation and commercialization of products. There are also a number of large and small industries investing heavily in microbiome research, which is clear recognition of the commercial benefits of microbiome research with critical environmental and social benefits. For example, it is predicted that in the EU, a higher number of bio-pesticides will be sold compared to chemical pesticides by 2020. 
The agricultural and nutrient sector, along with the health sector, is a key area of development in microbial biotechnology and will be an important driver of global economic growth and social and environmental sustainability.
Box 1. Key steps towards successful use of microbiome tools for food and nutrient security
Concluding remarks
In summary, we envisage the development of technologies which will allow the manipulation of the crop microbiome in situ. These technologies will become an integral part of the sustainable increase in agricultural productivity, ensuring food and nutrient security for future global populations. If this is to be achieved, both theoretical and technological advancements are needed (Box 1), utilizing multidisciplinary approaches to integrate emerging technologies (omics, 3D printing, synthetic biology) with more traditional approaches of microbial ecology, plant ecophysiology and genetics. These approaches will then be further embedded with remote sensing, satellite and sensor-based technologies with the ability to handle big data, to realize the true potential of microbiome tools in the agriculture and food sectors. In addition, challenges associated with social and regulatory policies will require simultaneous attention. Public acceptance of microbiome-based products will be crucial for the success of these technologies, and multidirectional communication among all stakeholders will help ensure success. Standardization of regulatory requirements at intergovernmental levels will provide easy access to the market, but at the same time ensure efficacy and safety of these products to maintain public confidence in these technologies. There are significant challenges to achieving the potential of the microbiome approach for food and nutrient security, but these are dwarfed by the potential economic, environmental and social benefits of taking this approach. For example, in addition to food and nutrient security, microbiome tools can substantially increase economic performance by commercializing new products, improve environmental health by reducing chemical contamination and create jobs in green industries.
Evolution, Transport Characteristics, and Potential Source Regions of PM2.5 and O3 Pollution in a Coastal City of China during 2015-2020
The evolution, transport characteristics, and potential source regions of PM2.5 and O3 were investigated from 1 January 2015 to 31 December 2020 in the coastal city of Nantong. The annual mean PM2.5 concentration declined markedly over the entire study period, and was 34.7 μg/m3 in 2020. O3 had a relatively smooth decreasing trend, but rebounded greatly during 2017, when the most frequent extreme high-temperature events occurred. Similar trends were observed for PM2.5 and O3 polluted hours. No PM2.5-O3 complex air pollution occurred in 2019 and 2020, likely reflecting the preliminary results of the implementation of emission controls. Notable differences in transport pathways and frequencies were observed from the backward trajectory clusters in the four seasons in Nantong. Clusters with the largest percentage of polluted PM2.5 and O3 trajectories were transported mostly over short distances rather than long distances. Analysis involving the potential source contribution function (PSCF) and concentration-weighted trajectory (CWT) showed that PM2.5 pollution sources were in the adjacent western and northwestern provinces, whereas the influence of eastern marine sources was relatively small. O3 had a very different spatial distribution of pollution source regions from PM2.5, mostly covering the North China Plain, the Bohai Sea, and the Yellow Sea.
Introduction
Fine particulate matter (PM2.5) and ozone (O3) are two of the largest contributors to air pollution in the tropospheric atmosphere due to their impact on human health, environmental degradation, vegetation production, and climate change [1][2][3]. Complex emissions and adverse meteorological conditions normally lead to high PM2.5 and O3 concentrations [4][5][6]. Apart from directly emitted particulate matter, both ground-level PM2.5 and O3 are mainly secondary pollutants. Secondary PM2.5 and O3 share similar precursors (e.g., nitrogen oxides (NOx) and volatile organic compounds (VOCs)) in photochemical reactions [7,8]. In addition, secondary PM2.5 is also formed by coagulation and nucleation of chemicals from direct emissions. Given the big challenge of controlling both PM2.5 and O3 pollution due to their highly nonlinear secondary formation, reducing emissions of NOx or VOCs for PM2.5 control might lead to unexpected adverse effects on O3 in the photochemical processes [8,9]. In addition, air pollution might worsen due to regional, long-range transport and unfavorable meteorological conditions, even when local emissions are reduced. Thus, both pollutants are of great concern for regional air pollution improvement. Currently, eastern China is an industrial and urbanized area with the densest population and highest emissions nationwide [10,11]. Due to the complex formation of PM2.5 and O3 from multiple sources and precursors, integrated tackling of these two pollutants has become one of the largest challenges facing this region. Although PM2.5 concentrations have declined in this region compared with previous years, following the stringent pollution mitigation measures taken since 2013, severe air pollution events still occur under some stagnant weather conditions [12]. In addition, O3 has shown an increasing trend, especially during the summers of the past few years [13,14].
Numerous studies have been conducted to explore the evolution and transport characteristics of PM2.5 and O3, as well as the influence of meteorological conditions in eastern China. However, most of these studies focused on megacities such as Shanghai, Nanjing, and Hangzhou which were severely polluted [15][16][17][18][19][20][21]. Few studies of PM2.5 and O3 have been performed in less polluted cities such as Nantong in this region. Nantong is one of many fast-growing coastal cities with a population of 7.72 million in Jiangsu Province, and is adjacent to Shanghai in the south across the Yangtze River. As with other cities in eastern China, Nantong is also suffering problems of PM2.5 and O3 complex air pollution with its rapid growth of industrialization. However, there have only been very limited studies focused on pollutant characteristics and their relationship with meteorological conditions over short periods of one year or less in Nantong [22]. To achieve better synergic control strategies for the pollution of PM2.5 and O3 in Nantong, it is urgent to strengthen the understanding of their long-term pollution characteristics, transport pathways, and potential source regions. To fill the knowledge gap, in this study an insight into the evolution, transport characteristics, and potential source regions of ground-level PM2.5 and O3 in Nantong during the 2015-2020 period was presented. The evolution of individual air pollutants, as well as nonattainment complex air pollution, were investigated. The transport pathways and potential source regions of PM2.5 and O3 were identified and synthetically analyzed using the backward trajectory cluster, the potential source contribution function (PSCF), and concentration weighted trajectory (CWT). Consequently, these results will provide an important basis for exploring efficient strategies to control both PM2.5 and O3 pollution in Nantong. Site Location Air pollutants of PM2.5 and O3 in the coastal city of Nantong from 2015 to 2020 were investigated here ( Figure 1). Located on the north wing of the Yangtze River estuary, and with 206 km of coastline, Nantong (32.01° N, 120.86° E) is one of the vital coastal port cities to foreign investment since the beginning of reform and opening up in Jiangsu Province, located at the Middle-Lower Yangtze Plain. At present, Nantong is one of the many fastgrowing and traditionally industrial cities, with its gross domestic product breaking the 1-trillion-yuan threshold in 2020. However, atmospheric environmental problems have brought much attention in the city with the rapid development. Besides, Nantong has a humid subtropical climate, with four distinct seasons influenced by complex climate systems such as seasonal monsoons and changeable weather, which contribute to a substantial influence on air pollutant emissions, formation, and transport pathways. Data and Analysis Methods Real-time, hourly concentrations of air pollutants, including PM2.5 and O3, at seven national air quality monitoring stations in Nantong were published on an online platform by the China National Environmental Monitoring Centre (CNEMC), while historical data are not openly available. Thus, we used historical data of PM2.5 and O3 from 1 January 2015 to 31 December 2020 by one provider (https://quotsoft.net/air/archived, accessed on 10 June 2021). A sanity check was conducted on the hourly data at individual sites to remove problematic data points before calculating average concentrations and parameters. 
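For concreteness, the site-level sanity check and the citywide averaging described here can be sketched in a few lines of pandas. This is only an illustration under assumptions: the file name, the column layout (a datetime column plus one column per monitoring site), and the plausibility bounds are placeholders, not the data provider's actual format.

```python
import pandas as pd

# Hypothetical input: one CSV per pollutant, a "datetime" column and one
# column per monitoring site (names and bounds below are assumptions).
def citywide_means(path, lo=0.0, hi=1000.0):
    df = pd.read_csv(path, parse_dates=["datetime"], index_col="datetime")
    # Sanity check: mask physically implausible or missing values per site.
    df = df.where((df >= lo) & (df <= hi))
    hourly = df.mean(axis=1, skipna=True)     # citywide hourly mean across sites
    daily = hourly.resample("D").mean()
    monthly = hourly.resample("MS").mean()
    annual = hourly.resample("YS").mean()
    return hourly, daily, monthly, annual

pm25_hourly, pm25_daily, pm25_monthly, pm25_annual = citywide_means("nantong_pm25_hourly.csv")
```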
The citywide hourly mean concentrations of PM2.5 and O3 were calculated by averaging the hourly data at all sites in the city; these citywide values, together with the daily, monthly, and annual mean concentrations derived from them, were used in the analysis. The three-hourly meteorological data, containing wind, temperature, and humidity, were obtained from the US National Centers for Environmental Prediction's Global Data Assimilation System (ftp://arlftp.arlhq.noaa.gov/pub/archives/gdas1, accessed on 10 June 2021) with a grid resolution of 1° × 1°. To explore the influence of air mass transport on PM2.5 and O3, 72-h air-mass backward trajectories at 500 m arrival height above ground level were calculated with the Hybrid Single-Particle Lagrangian Integrated Trajectory model (HYSPLIT) [23]. This height was chosen to represent a well-mixed convective boundary layer for regional investigation [24,25]. The model was run four times a day, at starting times of 0:00, 6:00, 12:00, and 18:00 local time, throughout the study period. Over the entire study period the number of trajectories was 2164 in spring, 2184 in summer, 2160 in autumn, and 2144 in winter. The multiple backward trajectories were clustered for the four seasons using the Euclidean distance. The most representative cluster number was determined to be five using the "eye ball" method in the TrajStat software, by plotting the percent change in total spatial variance (TSV) against the number of clusters: the curve first increases gradually and then shows a sudden increase, and the cluster number just before the first sudden increase was chosen for the clustering process [26,27]. PM2.5 and O3 concentrations were grouped according to the seasonal trajectory clusters. Hourly PM2.5 and O3 concentrations of 75 μg/m³ and 200 μg/m³ were defined as the "polluted" thresholds, referring to the Ambient Air Quality Standard (GB3095-2012) [28]. A PM2.5–O3 complex air pollution event was defined as one in which the mean PM2.5 concentration exceeded 75 μg/m³ and the O3 concentration exceeded 200 μg/m³ simultaneously. The potential source areas in different seasons were determined using the potential source contribution function (PSCF) and concentration-weighted trajectory (CWT) methods, combined with the pollutant concentrations at the receptor site [29]. The investigated area was divided into 1° × 1° grid cells (i, j) of equal size in both methods. The PSCF value of the ijth cell was defined as

PSCF_ij = m_ij / n_ij ,

where n_ij is the total number of trajectory endpoints falling in the ijth cell, and m_ij is the number of those endpoints for which the receptor concentration exceeded the threshold criterion, set at the mean seasonal concentration of PM2.5 or O3 in Nantong (Table 1). Areas with higher PSCF values denote a greater probability of potential source locations. However, the PSCF method fails to distinguish grid cells with the same PSCF_ij when the pollutant concentrations only slightly or prominently exceed the threshold criterion. The CWT method was used to overcome this limitation [30,31]. In the CWT method, a weighted average pollutant concentration is assigned to each grid cell:

CWT_ij = ( Σ_{l=1}^{M} C_l τ_ijl ) / ( Σ_{l=1}^{M} τ_ijl ) ,

where M is the total number of trajectories and l is the trajectory index, C_l is the observed pollutant concentration at the receptor when trajectory l arrives, and τ_ijl is the time spent by trajectory l in the ijth cell. 
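For concreteness, the gridded PSCF and CWT computations just defined can be written in a few lines of NumPy. This is a sketch under stated assumptions, not a definitive implementation: trajectory endpoints are pooled into flat arrays, each endpoint carries the receptor concentration of its parent trajectory, and hourly endpoints are used so that the residence time τ_ijl reduces to an endpoint count; the grid edges and variable names are placeholders.

```python
import numpy as np

def pscf_cwt(lats, lons, conc, thresh, lat_edges, lon_edges):
    """Gridded PSCF and CWT from pooled hourly trajectory endpoints.

    lats, lons : endpoint coordinates (all trajectories pooled into 1-D arrays)
    conc       : receptor concentration of the trajectory each endpoint belongs to
    thresh     : 'polluted' criterion, e.g. the seasonal mean PM2.5 or O3
    """
    bins = [lat_edges, lon_edges]
    n_ij, _, _ = np.histogram2d(lats, lons, bins=bins)                          # all endpoints
    m_ij, _, _ = np.histogram2d(lats[conc > thresh], lons[conc > thresh], bins=bins)
    c_ij, _, _ = np.histogram2d(lats, lons, bins=bins, weights=conc)            # sum of C_l * tau_ijl

    with np.errstate(invalid="ignore", divide="ignore"):
        pscf = np.where(n_ij > 0, m_ij / n_ij, 0.0)   # PSCF_ij = m_ij / n_ij
        cwt = np.where(n_ij > 0, c_ij / n_ij, 0.0)    # CWT_ij, tau counted in hourly endpoints
    return pscf, cwt
```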
Additionally, an arbitrary weight function (W_ij) was applied to reduce the uncertainty of PSCF and CWT values resulting from small n_ij values. W_ij was expressed as a piecewise function of n_ij that assigns progressively smaller weights to cells whose endpoint count falls below multiples of n_ave, where n_ave denotes the average number of endpoints per cell. Thus, the weighted PSCF and CWT values were computed as WPSCF_ij = W_ij × PSCF_ij and WCWT_ij = W_ij × CWT_ij. Evolution Characteristics of PM2.5 and O3 The evolution trends of annual pollutant concentrations in Nantong were investigated first (Figure 2 and Table 2). From 2015 to 2020, PM2.5 and O3 presented net decreasing trends of −3.7 and −1.2 μg/m³ per year, respectively. Very different evolution characteristics were observed for PM2.5 and O3. PM2.5 declined obviously and steadily over the entire period, except for a slight rebound in 2018, while O3 in 2017 bounced back to levels higher than those in 2015, which was attributed to the most frequent extreme high-temperature events (14 days above 35 °C) that year. These results are consistent with a previous study [32]. In addition, the O3 trend was relatively smooth over the six years. Although considerable reductions of PM2.5 were observed, the pollution control measures did little for O3, owing to its complicated, nonlinear photochemical formation, which requires precursor diagnosis and depends on meteorological conditions. Notably, in 2020 the average PM2.5 concentration fell to 34.7 μg/m³, below the 35 μg/m³ limit of the ambient air quality standard for residential areas, which was likely due to the drastically reduced emission of primary air pollutants under the lockdown measures during the COVID-19 outbreak between January and February 2020 [33]. The long-term variations of mean PM2.5 and O3 concentrations in the different seasons were investigated as well (Figure 3). The mean PM2.5 concentrations decreased in all seasons over the entire study period, except for the rebound in autumn 2018, which was related to unfavorable diffusion conditions of low wind speeds, high relative humidity, and inversion layers. Among the four seasons, the highest PM2.5 concentrations, and the most obvious decline, were observed in winter. However, the decline of PM2.5 has slowed in recent years. In addition, compared with PM2.5, the O3 concentrations first increased and then decreased in all seasons, with peak values in 2017 (spring, summer, winter) or 2018 (autumn), but changed only slightly in general. Higher concentrations with larger fluctuations were observed in summer and spring than in autumn and winter. These results are consistent with the yearly patterns shown in Figure 2. Transport Characteristics To identify the transport pathways of air masses, back trajectory clustering was utilized. The five major cluster pathways and the corresponding statistical results for each season over the entire period are shown in Figure 5 and Table 3. Generally, longer trajectories corresponded to higher velocities of air mass movement. The proportions of the clusters in the four seasons were related to the seasonal monsoons in Nantong, with a prevailing northerly wind in winter, a prevailing southerly wind in summer, and a transition in spring and autumn. In addition, variable weather conditions had a substantial impact as well. In spring, cluster 2 was the predominant pollution pathway, accounting for 46.62% (59.57%) of the PM2.5 (O3) polluted trajectories, respectively, followed by cluster 3. In addition, the mean PM2.5 concentration of cluster 2 was the highest among all clusters at 53.66 μg/m³, while cluster 1 had the maximum mean O3 concentration at 87.00 μg/m³. 
Cluster 2 air masses were short-range sources moving slowly from the nearby industrial provinces of Zhejiang and Jiangxi to the southwest, likely picking up considerable anthropogenic aerosols. Cluster 3 originated from South Korea and then traveled southward over the Yellow Sea. Clusters 1, 4, and 5 represented long-range transport and fast-moving trajectories from Russia and Inner Mongolia, with air masses containing soil and dust. In summer, clusters 2 and 5 were both from the southwest, but traveled short and long pathways from nearby provinces and the South China Sea, respectively. Air masses in cluster 3 were the cleanest, with the lowest PM2.5 and O3 loadings (16.77 ± 9.10 μg/m³ and 50.32 ± 37.50 μg/m³), originating directly from the Pacific Ocean. In general, the southerly clusters 2, 3, and 5 contributed 56.60% of all trajectories, consistent with the prevailing southerly monsoon. Cluster 4 was from South Korea and passed over the Yellow Sea. Cluster 1 came from Inner Mongolia, passing through multiple provinces before arriving at Nantong. In addition, clusters 1 and 2 contributed the largest percentage (80.00%) of polluted O3 trajectories in summer (Table 3). Air masses of most clusters in summer had relatively lower PM2.5 and higher O3 concentrations than those in other seasons. In autumn, all clusters except cluster 5, with a total ratio of 91.90%, gathered trajectories from the north. Among all clusters, cluster 1, which originated from the Yellow Sea, had the highest ratio of trajectories (41.02%) and of polluted PM2.5 and O3 trajectories (38.24% and 46.67%). Cluster 5 originated from Jiangxi Province and passed through Anhui Province; it had the lowest ratio of trajectories, but the highest mean PM2.5 concentration at 63.83 μg/m³. Clusters 3 and 4 were free of O3 polluted trajectories, with air masses from Mongolia and the Sea of Japan, respectively. In winter, north and northwest clusters prevailed, comprising 93.75% of the trajectories. Clusters 1, 3, and 2 came from similar northwest directions but over distinct transport distances. Among these clusters, cluster 2 showed the greatest occurrence probability as well as the highest ratio of polluted PM2.5 trajectories. Moreover, cluster 2 originated from Shandong Province with shorter trajectories, likely picking up more local and anthropogenic air masses. Notably, although cluster 5 had higher PM2.5 concentrations than cluster 2, it had a limited impact on PM2.5 concentrations in Nantong owing to its smallest ratio among all clusters. There was no O3 pollution event in winter, on account of the unfavorable weather conditions for photochemical reactions. Given the above, the main factors impacting the PM2.5 and O3 polluted trajectories in each season in Nantong were nearby short-distance sources rather than long-distance ones. Additionally, as Nantong is a coastal city, marine air masses played a very important role, as did those from the adjacent provinces. PSCF and CWT Modeling of Source Regions Figures 6 and 7 show the PSCF and CWT results for the different seasons in Nantong. As an auxiliary, the CWT values can help quantify the relative contribution of pollutants in each grid cell, compensating for the weakness of PSCF. Generally, greater PSCF and CWT values denote higher contributions to PM2.5 and O3 concentrations. For PM2.5, in all seasons, source regions in the western adjacent provinces had higher PSCF (>0.6) and CWT (>60 μg/m³) values, compared to the marine source areas with lower PSCF (<0.3) and CWT (<30 μg/m³) values. 
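The per-cluster statistics quoted in the transport analysis above (cluster shares, seasonal mean concentrations, and polluted-trajectory percentages of the kind listed in Table 3) can be tabulated with a short pandas groupby once each trajectory carries its cluster label and receptor concentrations. The table below is a placeholder with made-up values, shown only to illustrate the bookkeeping, not data from this study.

```python
import pandas as pd

# Hypothetical per-trajectory table: one row per back trajectory with its
# assigned cluster and the receptor concentrations at the arrival hour.
traj = pd.DataFrame({
    "cluster": [1, 2, 2, 3, 2, 1],
    "pm25":    [41.0, 62.5, 80.3, 18.7, 77.1, 30.2],    # μg/m³
    "o3":      [95.0, 150.2, 210.4, 60.3, 205.8, 88.1], # μg/m³
})

stats = traj.groupby("cluster").agg(
    n=("pm25", "size"),
    pm25_mean=("pm25", "mean"),
    o3_mean=("o3", "mean"),
    pm25_polluted_pct=("pm25", lambda s: (s > 75).mean() * 100),
    o3_polluted_pct=("o3", lambda s: (s > 200).mean() * 100),
)
stats["traj_share_pct"] = 100 * stats["n"] / stats["n"].sum()
print(stats.round(2))
```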
As a result, the main factors impacting the PM2.5 pollution in Nantong were sources from inland areas, covering Anhui, Henan, Hubei, Shanxi, and Shaanxi Provinces, and as far as Inner Mongolia, rather than marine areas. Most of the potential source domains were distributed clockwise from southeast to northwest in all seasons, consistent with the prevailing wind direction. According to the PSCF results, the largest domain of potential sources exceeding the mean PM2.5 concentration occurred in autumn, followed by winter, then spring and summer. However, the CWT analysis indicated that the concentrations of the potential sources were the greatest, exceeding 100 μg/m³, in winter. Therefore, a comprehensive analysis using both the PSCF and CWT values is necessary. Moreover, the polluted air masses mostly came from the northwesterly clusters, which contributed 84.32% of all polluted trajectories in winter (Table 3). The O3 potential source regions had a pattern similar to that of PM2.5 in terms of the overall distribution area. However, the locations of the more polluted source regions were very different. In addition to the source regions on the North China Plain, air masses over the Bohai Sea and the Yellow Sea also contributed a great deal to O3 concentrations in Nantong. This was likely due to the transport of O3 and its precursors by the transition between land and sea breeze circulation near the northern industrial coastal cities, which is consistent with the results of previous studies [34][35][36]. The severely polluted source regions varied seasonally. The polluted trajectories traveled roughly northwest–southeast in spring, autumn, and winter. Unlike in these seasons, the major severe sources of O3 in summer lay mostly clockwise from the southwest to the northeast, with the largest polluted area and the greatest values, exceeding 100 μg/m³. Meanwhile, these areas accounted for 97.5% of the polluted trajectories in summer, as shown in Table 3. Conclusions The evolution, transport, and potential source regions of PM2.5 and O3 were comprehensively characterized from 1 January 2015 to 31 December 2020 in Nantong. The annual evolution of PM2.5 (O3) concentrations and the corresponding trends of pollution hours were presented in detail. The transport pathways and potential source regions of PM2.5 and O3 were identified and determined by cluster analysis and by the PSCF and CWT methods, respectively. The major conclusions were as follows: The annual mean PM2.5 concentration declined obviously from 56.5 μg/m³ to 34.7 μg/m³ over the entire study period. O3 had a relatively smooth decreasing trend, but rebounded greatly during 2017, when the most frequent extreme high-temperature events occurred. Similar trends were observed for PM2.5–O3 polluted hours, with some fluctuations: a sharp decrease from 2015 to 2016 and then an increase to peak values in 2018. No PM2.5–O3 complex pollution event occurred in 2019 and 2020, indicating the preliminary effect of the implemented emission controls. Notable differences in transport pathways and frequencies were observed among the four seasons in Nantong. Air masses of most clusters in summer had the lowest (highest) PM2.5 (O3) concentrations among the four seasons. 
Clusters with the largest percentage of polluted PM2.5 and O3 trajectories came from the southwestern adjacent provinces in spring and summer, from the northwestern adjacent provinces in winter, and from the northeastern ocean near Nantong in autumn; these were mostly short-distance sources rather than long-distance transport sources. The PSCF method mainly focuses on source identification, calculating and describing possible source locations, while the CWT method can distinguish the source strength more easily by assigning the concentration values observed at the receptor site. The PSCF and CWT results showed that the PM2.5 sources affecting Nantong were in the adjacent western and northwestern provinces, with higher PSCF (>0.6) and CWT (>60 μg/m³) values, and that the influence of marine sources was relatively small, with lower PSCF (<0.3) and CWT (<30 μg/m³) values. The O3 potential source regions had a distribution pattern similar to that of PM2.5, but significantly different polluted source regions. Apart from the O3 source regions on the North China Plain, potential sources over the Bohai Sea and the Yellow Sea also contributed a great deal, which is attributed to the transport of O3 and its precursors by the transition between land and sea breeze circulation near the northern industrial coastal cities. In addition, the severely polluted source regions of PM2.5 and O3 varied seasonally. Polluted PM2.5 air masses mostly came from the northwesterly clusters, contributing 84.32% of all polluted trajectories in winter, while the major severe sources of O3, distributed clockwise from the southwest to the northeast, accounted for 97.5% of the polluted trajectories in summer. The results presented here suggest that, despite the efforts already made, control of PM2.5 and O3 emissions in the adjacent provinces will continue to play a significant role in achieving compliance with the air quality standard in Nantong. Nonetheless, a detailed further investigation of the impact of meteorological conditions on pollution transport pathways is still needed, which will provide an important scientific basis for exploring efficient air pollution reduction strategies.
2021-10-18T17:25:48.613Z
2021-10-01T00:00:00.000
{ "year": 2021, "sha1": "345298419df56f0a439c6718f8f3d7d0ff407e49", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2073-4433/12/10/1282/pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "8948cd3d61ff521ee1d13e5ce162126e9d81354f", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Environmental Science" ] }
2961400
pes2o/s2orc
v3-fos-license
Discovering Characteristic Landmarks on Ancient Coins using Convolutional Networks In this paper, we propose a novel method to find characteristic landmarks on ancient Roman imperial coins using deep convolutional neural network models (CNNs). We formulate an optimization problem to discover class-specific regions while guaranteeing specific controlled loss of accuracy. Analysis on visualization of the discovered region confirms that not only can the proposed method successfully find a set of characteristic regions per class, but also the discovered region is consistent with human expert annotations. We also propose a new framework to recognize the Roman coins which exploits hierarchical structure of the ancient Roman coins using the state-of-the-art classification power of the CNNs adopted to a new task of coin classification. Experimental results show that the proposed framework is able to effectively recognize the ancient Roman coins. For this research, we have collected a new Roman coin dataset where all coins are annotated and consist of observe (head) and reverse (tail) images. Introduction The ancient Roman coins have not only bullion values from precious materials such as gold and silver, but also they provide people with beautiful and historical arts of relief. They were first introduced during the third century BC and continued to be minted well across Imperial times. The major role of the Roman coins was to make an exchange of goods and services easy for the Roman commerce. Another important role in which researchers in numismatics have been interested is to convey historical events or news of the Roman empire via images on the coins. Especially, the Roman imperial coins were used to provide political propaganda across the empire by engraving portraits of the Roman emperors or important achievement of the empire. As the Roman imperial coins are closely connect to the historical events of the empire, they could serve as importance references to understand the history of the Roman empire. In this paper, we aim at automatically finding visual characteristics of the ancient Roman imperial coins which make them distinguishable from the others, as well as recognizing their identities. To achieve these goals, we collected Roman imperial coin images with their descriptions. We used the Roman Imperial Coinage (RIC) [22] to annotate the collected coin images. RIC is a comprehensive numismatic catalog of Roman imperial currency which is the results of several decades of work. The RIC provides a chronological catalog of the coins from 31 BC to 491 AD with description of both the obverse (head) and reverse (tail) sides of the coin. Figure 1 shows example observe and reverse images and their descriptions. For the purpose of the classification, we use the catalog number of RIC as a label to predict. Automatic methods to identify the ancient coins have been attracted as a growing number of the coins are being traded everyday over the Internet [4,12]. One of the main issues in the active coin market is to prevent illegal trade and theft of the coins. Traditionally, coin identification depends on manually searching catalogs of coin markets, auctions or the Internet. However, it is impossible for the manual search to cover all trades because the coin market is very active, for example, over a half million coins are traded annually only in the north American market [12]. Therefore, automatic identification of the ancient coins becomes significant. 
Several works on coin classification have appeared in the computer vision field. Some proposed methods use the edge detection of the engraved image on the coin [23,26]. Others represent the coin images as local features such as SIFT [20] and perform the classification [14]. Methods using the spatial pyramid models [3] and orientations of pixels [4] are proposed to exploit the spatial information. Aligning coin images using the deformable part models has refined the recognition accuracy over the standard spatial pyramid models [15]. In this paper, we propose an automatic recognition method for the Roman imperial coins using the convolutional neural network models (CNNs). Recently, the CNN models have shown the state-of-the-art performance in various computer vision problems including recognition, detection and segmentation [8,11,27,28], driven by the increasing availability of large training dataset and the improvement of the computational power of the GPUs. In this paper, we propose a hierarchical framework which employs the CNN models for the coin classification tasks by fine-tuning a pre-trained CNN model on the ImageNet dataset [7]. Second, we propose a novel method to find characteristic landmarks on the coin images. Our method is motivated by class saliency extraction proposed in [24] to find class-sensitive regions. In this paper, we formulate an optimization problem so that a minimal set of the parts on the coin image will be selected while the chosen set is still be recognized as the same category as the full image by the CNN models. We consider the chosen parts are deemed the persistent, discriminative landmarks of the coin. Such landmarks can be critical for analysis of coin features by domain experts, such as numismatists or historians. The contributions of the paper can be highlighted as follows: 1) a new coin data set where all the coins have both observe (head) and reverse (tail) images with annotations, 2) a new framework of recognizing the Ancient Roman coins based on the CNNs, 3) a new optimization-based method to automatically find characteristic regions using the CNNs while guaranteeing specific controlled loss of accuracy. Related Work There have been several methods to recognize coins in the computer vision field. Bag-of-words approaches with extracted visual descriptors for the coin recognition were proposed in [2,3,4,15]. A directional kernel to consider orientations of pixels [4] and an angle histogram method [2] were proposed to use the explicit spatial information. In [3], rectangular spatial tiling, log-polar spatial tiling and circular spatial tiling methods were used to recognize the ancient coins. Aligning the coin images by the deformable part model (DPM) [9] further improves the recognition accuracy over the standard spatial pyramid model [15]. In this paper, we use the CNNs which exploit the spatial information by performing the convolution and handle the displacement of the coin image by performing the max-pooling. The Roman imperial coin classification problem can be formulated as the fine-grained classification as all coins belong to one super class, coin. To identify one class from the other looking-similar classes, which is one of the challenges in the fine-grained classification, people have conducted research on the part-based models so that objects are divided into a set of smaller parts and classification is performed by comparing the parts [5,10]. However, those methods require annotated part labels while training, which takes an effort to obtain. 
In this paper, we investigate an automatic method to find discriminative regions on the coins that does not depend on human effort. With the impressive performance of the deep convolutional neural network models, many papers have been proposed to understand why and how they perform so well and to give insight into the behavior of the internal layers. The deconvolutional network [27] visualized the feature activities in the intermediate layers of the CNN models by mapping features back to pixels in the reverse order. A data-driven approach to visualize the receptive field of a neuron in the network was proposed in [30]. The method in [30] is based on an exhaustive search using the sliding-window technique and measures the difference between presence and absence of one window on the coin image. In [24], they propose an optimization method to reconstruct a representative image of a class from an empty image by calculating the gradient of the CNN model with respect to the image. In this paper, we propose a novel method to find discriminative landmarks of the coin image by formulating an optimization problem. Unlike [30], which requires exhaustive CNN evaluations for the sliding windows, our method effectively finds a set of discriminative regions by performing the optimization. Proposed Method In this section, we first describe how to train our convolutional neural network model for the task of Roman imperial coin classification. Then, we propose a novel method to discover characteristic landmarks which make one coin distinguishable from the others. Training Convolutional Neural Network for Coin Classification The convolutional neural network (CNN) is the most popular deep learning method, heavily studied in the 1990s [18]. Recently, large amounts of labeled data and the computational power of GPUs have made the convolutional network the most accurate object classification method [16]. Let S_c(x) be the score of class c for input x, which is fed to a classification layer (e.g., the 1000-way softmax layer in [16]). Assuming that the softmax loss function is used, the loss function ℓ_c of the CNN can be defined as

ℓ_c(w) = −log [ exp(S_c(x)) / Σ_{c′} exp(S_{c′}(x)) ] ,    (1)

where w are the weights of the complex, highly structured, deep CNN. Then, stochastic gradient descent is used to minimize the loss ℓ_c by computing the gradient with respect to w as ∂ℓ_c/∂w. Although CNN models are successful when a large amount of labeled data is available, they are likely to perform poorly on small datasets because there are millions of parameters to be estimated [29]. To overcome this limitation on small data, a method to fine-tune a pre-trained model for new tasks was proposed and has shown successful performance [19,28,31]. In the fine-tuning method, we only need to change the softmax layer (which is usually the last layer of the CNN models) appropriately for the new task. Considering the number of coin images in our dataset (about 4500), the CNN model is likely to overfit if we train it only on the coin dataset, even if we use the data augmentation method [16]. Therefore, we train a deep convolutional neural network (CNN) model in the fine-tuning manner. To achieve this goal, we adopt one of the most popular architectures, proposed by Krizhevsky et al. [16], which is pre-trained on ImageNet with millions of natural images. Specifically, we change the softmax layer of [16] for our classification purpose, and then fine-tune the convolutional network in the supervised setting. 
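The paper's fine-tuning was done in Caffe on the Krizhevsky et al. architecture; the sketch below is an analogous (not the authors') recipe in PyTorch, swapping the 1000-way ImageNet layer for a 314-way RIC classifier and giving the new layer a larger learning rate than the pretrained backbone. The learning rates, momentum, and weight decay are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_RIC_LABELS = 314   # 96 for the emperor (observe-side) model

# ImageNet-pretrained AlexNet-style network; replace the final 1000-way layer.
model = models.alexnet(pretrained=True)
model.classifier[6] = nn.Linear(4096, NUM_RIC_LABELS)

# Small learning rate for the pretrained layers, larger for the new head.
backbone_params = [p for name, p in model.named_parameters()
                   if not name.startswith("classifier.6")]
head_params = list(model.classifier[6].parameters())
optimizer = torch.optim.SGD([
    {"params": backbone_params, "lr": 1e-4},
    {"params": head_params, "lr": 1e-2},
], momentum=0.9, weight_decay=5e-4)

criterion = nn.CrossEntropyLoss()   # softmax loss, cf. Eq. (1)

def train_step(images, labels):
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```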
When training, we resize the original coin image to 256×256 and randomly crop a sub region of size 224 × 224 as the data augmentation discussed in [16]. When testing, we crop the center of the coin. We use the open-source package Caffe [13] to implement our CNN model. Hierarchical Classification Each coin in our dataset has both observe (head) and reverse (tail) images. A straight-forward method to use both images is to feed them together to classifiers (e.g., SVM or CNN) when training. In this paper, we exploit a hierarchical structure of the Roman imperial coins. One Roman emperor includes several RIC labels as shown in Figure 1 while one RIC label belongs to exactly one emperor. Therefore, we can build a tree structure to represent the relationship between the Roman emperors and the RIC labels as depicted in Figure 2. In the Emperor layer, we compute probability p(e|I o ) for Emperor e given observe im- Figure 2: Hierarchical classification for the RIC label. I o and I r are the observe and reverse images, respectively. In the Emperor layer, we compute the probability of RIC label r given reverse image I r (resp., in the RIC layer, probability of e given I o ). Then the final prediction is defined as the product of the probabilities on the path from the root to the leaf. age I o , and in the RIC layer, p(r|I r ) for RIC label r given I r . Then the final probability is defined to be the product of the probabilities on the path from the root to the leaf as p(e|I o ) · p(r|I r ) · δ(P a(r) = e) where P a(r) is the parent of node r and δ(·) is the indicator function. For this purpose, we train two CNN models, one for the RIC label taking the reverse image and the other for the Roman emperor taking observe image. For a given pair observe and reverse images, we evaluate the probabilities on the nodes in the tree and choose the leaf node with the maximum value as the prediction result. Finding Characteristic Landmarks on Roman Coins The coin classification problem can be considered as the fine-grained classification problem as all the images belong to one super class. Finding discriminative regions that represent class characteristics plays an important role in the fine-grained classification. This is specifically true in the context of Roman coins, where domain experts (e.g., numismatists) seek intuitive, visual landmark feedback associated with an otherwise automated classification task. In this section, we introduce our method to discover characteristic landmarks on the Roman coins using the CNN model. We define the characteristic region set as the smallest set of local patches sufficient to represent the identity of the full image and distinguish it from other available classes. Several approaches have been presented in the past that attempt to identify intuitive/visual class characteristics of CNNs [21,24,25]. However, their main purpose is largely to reconstruct a representative, prototypical class image and not necessarily find the discriminative regions. Unlike the previous methods, the proposed method starts from specific input image and removes visual information deemed irrelevant for the coin's accurate classification as an instance of the same class. Let I and I(i) be the vectorized image and the ith pixel intensity of image I, respectively. Let r k , 1 ≤ k ≤ K, be the set of indices that belongs to the kth subregion in image I. The subregion could be a superpixel, a patch from the sliding window with overlapping, or even one pixel. 
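For concreteness, a small NumPy sketch (an assumption about one possible implementation, not the authors' code) shows how overlapping sliding-window subregions r_k can be built as boolean masks, together with the per-pixel overlap count used as the normalization vector C below; the window size and stride are placeholder values.

```python
import numpy as np

def sliding_window_regions(height, width, win=21, stride=10):
    """Boolean masks r_k for overlapping square patches, plus overlap counts C."""
    regions = []
    for top in range(0, height - win + 1, stride):
        for left in range(0, width - win + 1, stride):
            mask = np.zeros((height, width), dtype=bool)
            mask[top:top + win, left:left + win] = True
            regions.append(mask)
    C = np.sum(regions, axis=0).astype(float)   # times each pixel is covered
    C[C == 0] = 1.0                             # avoid division by zero at borders
    return regions, C

regions, C = sliding_window_regions(256, 256, win=21, stride=10)
```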
We define I_k to represent the kth subregion as follows:

I_k(i) = I(i) if i ∈ r_k ,  and  I_k(i) = 0 otherwise .

Then we define a mask function f_I(x), with mask x ∈ [0, 1]^K, which maps image I to the masked image as a function of x:

f_I(x) = C^{-1} ⊗ Σ_{k=1}^{K} x_k I_k ,

where ⊗ is the element-wise product and C is a normalization vector counting how many times each pixel appears across the subregions, C(i) = Σ_k 1[i ∈ r_k], with C^{-1} taken element-wise. x_k controls the transparency of the subregion r_k, so that x_k = 1 means the subregion keeps its full pixel intensity while x_k = 0 implies that the region is transparent. We would like to find an image that consists of the smallest set of regions but can still be correctly classified by the original CNN model, with some small, controlled loss in confidence. With the definition of f_I(x), we formulate our goal as follows:

minimize over x ∈ [0, 1]^K :  ℓ_c(f_I(x)) + λ R(x) ,    (4)
subject to :  p(c | f_I(1)) − p(c | f_I(x)) ≤ ε ,    (5)

where ℓ_c(·) is the loss function of the CNN model for class c, 1 is a vector of all ones, R(·) is a regularization function, and λ is a hyperparameter to control the regularization. We place the constraint so that the prediction probability of the masked image f_I(x) may differ from that of the original image f_I(1) by at most ε. Because we are interested in the absence or presence of a region, the L0-norm would be an ideal choice for the regularization function. However, it is non-differentiable, making it difficult to optimize the objective function. Therefore we resort to the L1 norm, which is the closest convex, C0-continuous approximation of the L0-norm. Both λ and ε have similar roles in controlling the prediction accuracy of the masked image. If we increase λ, p(c|f_I(x)) decreases because the optimization puts more emphasis on minimizing |x|_1 than on the loss function. Similarly, a large ε allows a low prediction accuracy of the masked image. Therefore, in this paper we fix λ to 1 and control ε, because ε explicitly sets the lower bound on the prediction accuracy. We use the negative log of the softmax function as the loss:

ℓ_c(f_I(x)) = −log [ exp(S_c(f_I(x))) / Σ_{c′} exp(S_{c′}(f_I(x))) ] ,    (6)

where S_c is the score for class c as in (1). The optimization in (4) is in general a non-convex problem, a consequence of the non-convex CNN mapping. We approach the minimization task in (4) using a general subgradient descent optimization with backprojection. The gradient can be computed using the chain rule as ∂ℓ_c/∂x = (∂f_I(x)/∂x)^T ∂ℓ_c/∂f_I(x). The second component of the gradient, ∂ℓ_c/∂f_I(x), represents the sensitivity of the CNN output with respect to the input image (region) and can be computed by backpropagation, as discussed in [24]. Note that this quantity differs from the typical sensitivity of the loss with respect to the CNN parameters used in CNN training. Because f_I(x) is a linear function of the mask x, the gradient ∂f_I(x)/∂x is easily computed: its kth column, ∂f_I(x)/∂x_k = C^{-1} ⊗ I_k, is the kth component of the masked image. The standard gradient descent method to minimize (4) may violate the constraint (5) because of the regularization term that enforces sparseness. Therefore, we use the backprojection method for the optimization. We first initialize x to 1 (i.e., the full image), then perform gradient descent. If a violation occurs, we remedy it by taking the gradient with respect to only the loss function, without considering the regularization, until the constraint is satisfied. During the optimization, the loss function and the L1 regularization term in (4) compete with each other under the constraint (5). Minimization of the loss function alone typically requires a large number of regions. On the other hand, the regularization term attempts to select as few landmarks as possible. 
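A condensed PyTorch-style sketch of the masked-image construction and of the projected subgradient loop with backprojection described above is given below, using region masks and overlap counts such as those built in the earlier sketch. It is only an illustration under assumptions: the step size, iteration count, and the simple "drop the sparsity term while the constraint is violated" rule are simplifications, not the authors' exact schedule.

```python
import torch
import torch.nn.functional as F

def masked_image(I, regions, x, C):
    """f_I(x): sum of x_k-weighted subregions, normalized by overlap count C."""
    out = torch.zeros_like(I)
    for k, mask_k in enumerate(regions):
        m = torch.as_tensor(mask_k, dtype=I.dtype, device=I.device)
        out = out + x[k] * (I * m)
    return out / torch.as_tensor(C, dtype=I.dtype, device=I.device)

def find_landmarks(model, I, regions, C, cls, lam=1.0, eps=0.5, lr=0.05, iters=200):
    model.eval()
    with torch.no_grad():
        p_full = F.softmax(model(I.unsqueeze(0)), dim=1)[0, cls]
    x = torch.ones(len(regions), requires_grad=True)   # start from the full image
    for _ in range(iters):
        img = masked_image(I, regions, x, C)
        logits = model(img.unsqueeze(0))
        loss_c = F.cross_entropy(logits, torch.tensor([cls]))
        p_masked = F.softmax(logits, dim=1)[0, cls]
        violated = (p_full - p_masked) > eps
        # Backprojection: ignore the sparsity term while the constraint is violated.
        objective = loss_c if violated else loss_c + lam * x.abs().sum()
        grad, = torch.autograd.grad(objective, x)
        with torch.no_grad():
            x -= lr * grad
            x.clamp_(0.0, 1.0)                          # keep mask entries in [0, 1]
    return x.detach()
```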
Because non-discriminative regions usually do not contribute to minimization of the loss function, they are more likely to be removed than the persistent, discriminative regions. Experiments and Results In this section, we explain our experimental settings including the coin data collection. We then discuss the coin classification using the CNN model. Finally, we analyze the results of discovering characteristic landmarks on the coin images. Experimental Settings Data collection: We have collected ancient Roman Imperial coin images from numismatic web sites. As we are dealing with the problem of recognizing given the coin images, we did not consider the coins that are severely damaged or hard to recognize. In the next step, we removed the background of each coin image by a standard background removal method and resized it to 256 × 256. Each coin in the dataset has both observe (front) and reverse images. For the purpose of the classification, we label the coin images according to their RIC [6,22]. We found that the coins with similar descriptions look similar to each other, making them almost impossible to differentiate. Therefore, if the number of the different words in the descriptions for the two coins was less than a threshold (we set it to 2 in this paper), we considered them as the same class and assigned the same label. Finally, we create a new coin dataset consisting of 4526 coins with RIC 314 labels and 96 Roman emperors. Baseline method: As a baseline method, we use the SVM model as described in [15]. In [15], they extracted the SIFT descriptors [20] in the dense manner and used the k-means clustering method to build the visual code book. Then, the image is represented as a histogram of the visual words from the codebook. We also use the spatial pyramid model [17] to exploit the spatial information. In this paper, we use the polar coordinate system as the spatial pyramid as it has shown the best performance in the previous ancient coin recognition approaches [1,15]. The polar coordinate system models r radial scales and θ angular orientations. We empirically use r = 2 and θ = 6. Evaluation measure for classification: For measuring classification performance, we use 5-fold cross-validation with class-balanced partitions: we repeat the experiments 5 times with 4 subgroups as training data and 1 subgroup as test data so that each of 5 subgroups becomes the test data. The classification accuracy is measure by the mean of the diagonal of the confusion matrix and we report the average of the 5 accuracies for 5 data splits. CNN settings: We use the open-source package Caffe [13] to implement our CNN model. We follow the same network architecture as in [16] except the final output layer where we replace the original 1000-way classification layer for our classification purpose (314-way for the RIC label prediction and 96-way for the Roman emperor prediction). We also decrease the overall learning rate while increasing the learning rate for the new layer so that the rest of the model changes slowly while keeping a stronger pace of updates in the final layer [13]. Coin Classification Results We first discuss the fine-tuned CNN models that we use in this paper. Figure 3 depicts how the classification accuracy changes as a function of the iteration number (epoch). As shown in the figure, the classification accuracy remains steady. after 40,000 iterations. Therefore, we fine-tuned our CNN models over 50,000 iterations and use them thereafter. 
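The evaluation protocol above (class-balanced 5-fold cross-validation, with accuracy taken as the mean of the diagonal of the confusion matrix) can be sketched with scikit-learn as follows. The classifier object and the feature matrix are placeholders, and the authors' exact partitioning may differ from the stratified splitter used here.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import confusion_matrix

def mean_diagonal_accuracy(y_true, y_pred, labels):
    cm = confusion_matrix(y_true, y_pred, labels=labels, normalize="true")
    return np.nanmean(np.diag(cm))          # mean per-class recall

def cross_validate(clf, X, y, n_splits=5, seed=0):
    labels = np.unique(y)
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    scores = []
    for train_idx, test_idx in skf.split(X, y):
        clf.fit(X[train_idx], y[train_idx])
        scores.append(mean_diagonal_accuracy(y[test_idx], clf.predict(X[test_idx]), labels))
    return float(np.mean(scores))           # average over the 5 data splits
```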
The classification accuracy of the CNN model on the collected coin dataset is given in Table 1. Reverse presents the task to predict the RIC label given the reverse image. In Hierarchy, we predict the RIC label given both the observe and reverse images using the hierarchical classification method as we discussed in Section 3.2. We also show the classification accuracy for Observe which represents the task to predict the Roman emperor given the observe image. Because the number of the emperors is less than the number of the RIC labels, Observe is easier than the other tasks. Hierarchy can get benefit from performing the easy task (the emperor prediction) first followed by the more difficult task of RIC prediction. CNN significantly outperforms SVM in all three tasks leading to up to 20% increase in accuracy. Specifically, CNN shows most significant improvement on Reverse side RIC classification for two reasons. First, there are significantly fewer emperors (96) than RIC labels (314). Figure 4: Confusion matrices of CNN and SVM for Reverse and Hierarchy. In both models, Hierarchy performs better than Reverse. CNN Hierarchy has improved the classification accuracies across all the RIC labels as it takes an advantage of the hierarchical structure of the RIC labels. For visualization, the smoothed heat map is used. Next, the structure of Reverse side is typically more complex than that depicted on Observe, consisting of wellstructured face profiles. The convolutional feature of CNN is able to more effectively exploit the spatial information than the spatial histogram used in SVM. Coins with the same RIC label have few consistent characteristic landmark regions and the CNN model is able to locate them effectively. On the other hand, SVM has to depend on the fixed structure of the spatial pyramid model which may not be appropriate for some specific RIC labels. We will discuss the recovery of the discriminative regions found by the CNN models in Section 4.3. The confusion matrices for the classification of the RIC label are depicted in Figure 4. Hierarchy outperforms Reverse in both CNN and SVM models as it exploits the hierarchical structure of the RIC labels. To better understand this phenomenon, we select two classes that are confused by Reverse but Hierarchy can distinguish them correctly as shown in Figure 5. The confusion caused by the similarity between the reverse images can be removed using the differently depicted observe images. Discriminative Regions and Landmarks In this section, we first examine how the selected regions and confidence values from the CNN model change as a function of . For this purpose, we choose one reverse and one observe images and vary from 0.1 to 1.0. Note that = 1 implies that the constraint in (4) will never be violated. Figure 6 shows the visualization of discovered landmarks as a function of . Because larger allows smaller confidence value, the total area of the characteristic parts becomes smaller, i.e. very essential parts are remained. Therefore as we increase , relatively less significant regions are first removed on the coin. For example, Venus in the upper panel holds a small apple which is considered as the characteristic part at first. However, as we increase , the size of the discriminative areas becomes smaller and finally the apple turns out less significant than the toss. When = 1, no constraint is placed during the optimization. 
Therefore, the gradient decent method tries to find the mask as sparse as possible without considering the correct prediction, having the discovered regions meaningless. On the other hand, the discriminative regions on the observe images change slowly. Unlike the reverse where different characteristic symbols appear in variable locations, the observe images have common structures, i.e. profiles of the Roman emperors. Therefore, the observe images need more parts to remain distinguishable from the others than the reverse images. As shown in Figure 6, head and bust remains present for all values. Figure 7 depicts the visualization of the discovered landmarks on both reverse and observe images with two different sliding windows (11 × 11 and 21 × 21). We set to 0.5 and choose the coins that are correctly classified by the CNNs for the experiments. The results confirm that the proposed method is robust with respect to the window sizes. Moreover, the coins with the same RIC label, (a) and (b), (c) and (d), (g) and (h) in Figure 7, share the similar landmarks. The results imply that there exists a set of characteristic regions per class, class-specific discriminative regions. As we will see next, such regions indeed point to intuitive visual landmarks associated with RIC descriptions. Qualitative analysis: There is no ground truth information available for the discriminative regions. Therefore, we qualitatively analyze our proposed method with two different schemes. First, we qualitatively compare our proposed method with recently proposed approaches [24,30]. In [30], they identify which regions of the image lead to the high unit activations by replicating an image many times with small occluders at different locations in the image and measuring the discrepancy of the unit activations between the original image and the occluded images. An image patch which leads to large discrepancy can be considered as important to the unit. We use two different sizes of the image patches (11 × 11 and 21 × 21) and measure the difference of the class score (S c in (6)) on a dense grid with a stride of 3. The saliency extraction method which computes the gradient of the CNNs with respect to the image was proposed in [24]. The single pass of the back propagation is used to find the saliency map. For fair comparison, we perform the moving average with the same patch size as the other experiments subsequent to back propagation. The experimental results in Figure 7 show that our method and [30] largely agree with each other. This implies that our method is able to find the important regions which lead to large discrepancy or, equivalently, significant changes in classification accuracy. On the other hand, the saliency extraction method [24] tries to find strong edge areas without considering class-characteristics. For example, it fails to find the shield in Figure 7a and 7b, which both our method and [30] are able to discover. Nevertheless, our proposed method has a distinct advantage over the occlusion-based approach as in [30] in terms of computational time. The method in [30] requires a very large number of CNN evaluations (e.g., more than 5000 for an image of 256 × 256). On the other hand, the proposed method is based on the optimization formulation, usually converging in fewer than 100 iterations, while identifying qualitatively similar landmarks. Next, we use the coin descriptions from RIC to analyze the proposed method. 
For this purpose, we remove stop words in the RIC descriptions and list the remaining words as depicted in Figure 7. The selected landmarks by the proposed method strongly correlate with the descriptions, such as the shield found in Figure 7a, 7b, 7c and 7d. On the other hand, the apple in Figure 7f is successfully found by our method while the others fail to find it or discover it with little confidence. This attests to the practical utility of the proposed approach in identifying the landmarks consistent with human expert annotations. In addition, the proposed method may assist non-experts in generating a visual guidebook to identify the ancient Roman coins without specific domain expertise. Conclusion We proposed a novel method to discover the characteristic landmarks of the ancient Roman imperial coins. Our method automatically finds the smallest set of the discriminative regions sufficient to represent the identity of the full image and distinguish it from other available classes. The qualitative analysis on the visualization of the discovered regions confirm that the proposed method is able to effectively find the class-specific regions but also it is consistent with the human expert annotations. The proposed framework to identify the ancient Roman imperial coins outperforms the previous approach in the domain of the coin classification by using the hierarchical structure of the RIC labels. Proposed method [30] [ Figure 7: Visualization of discovered landmarks for reverse and observe images. Red denotes more discriminative, blue less significant. The proposed method and [30] agree with each other while the saliency extraction [24] focuses on strong edge areas. Note that the discovered regions are correlated with descriptions from human expert annotations. For visualization, we rescale x * to the range of [0, 1].
2015-07-01T01:10:13.000Z
2015-06-30T00:00:00.000
{ "year": 2016, "sha1": "df526b1bf24344caaf5509928299e9d3b2e101dc", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1506.09174", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "0fc242737a72b70692813b622c4a19192e845740", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
219980380
pes2o/s2orc
v3-fos-license
All Holographic Four-Point Functions in All Maximally Supersymmetric CFTs We present a constructive derivation of holographic four-point correlators of arbitrary half-BPS operators for all maximally supersymmetric conformal field theories in $d>2$. This includes holographic correlators in 3d ${\cal N}=8$ ABJM theories, 4d ${\cal N}=4$ SYM theory and the 6d ${\cal N}=(2,0)$ theory, dual to tree-level amplitudes in 11D supergravity on $AdS_4 \times S^7$, 10D supergravity on $AdS_5 \times S^5$ and 11D supergravity on $AdS_7 \times S^4$, respectively. We introduce the concept of Maximally R-symmetry Violating (MRV) amplitude, which corresponds to a special configuration in the R-symmetry space. In this limit the amplitude drastically simplifies, but at the same time the entire polar part of the full amplitude can be recovered from this limit. Furthermore, for a specific choice of the polar part, contact terms can be shown to be absent, by using the superconformal Ward identities and the flat space limit. Introduction The AdS/CFT duality remains to this day the best tool to study physics at strong coupling analytically. Yet twenty two years since its discovery [1][2][3], we are still on our way to harnessing the full computational power of this correspondence. The duality is simplest to study when there is a maximal amount of superconformal symmetry (i.e., sixteen supercharges). This leads to three possibilities 1 : • M-theory on AdS 4 × S 7 dual to the 3d N = 8 Aharony-Bergman-Jafferis- Maldacena (ABJM) theory [4], with superconformal group OSp(4|8); appendices. Kinematics We focus on the one-half BPS local operators in superconformal field theories which have sixteen supercharges. Such operators O I 1 ...I k k transform in the rank-k symmetric traceless representation of an SO(d) R-symmetry group, with k = 2, 3 . . .. They have protected conformal dimension ∆ k = k, where is related to the spacetime dimension d via = d−2 2 . It is convenient to keep track of the R-symmetry indices by contracting them with null vectors O k (x, t) = O I 1 ,...,I k k (x)t I 1 . . . t I k , t · t = 0 . (2.1) The four-point functions are denoted by and are functions both of the spacetime coordinates x i and internal coordinates t i . We will often leave the k i dependence in G k 1 k 2 k 3 k 4 (x i , t i ) implicit to avoid overloading the notation. We can assume, without loss of generality, that the weights k i are ordered as k 1 ≤ k 2 ≤ k 3 ≤ k 4 . Then we need to further distinguish two possibilities k 1 + k 4 ≥ k 2 + k 3 (case I) , k 1 + k 4 < k 2 + k 3 (case II) . Here x ij = x i − x j , t ij = t i · t j , and E is the extremality The exponents are given by where κ s ≡ |k 3 + k 4 − k 1 − k 2 | , κ t ≡ |k 1 + k 4 − k 2 − k 3 | , κ u ≡ |k 2 + k 4 − k 1 − k 3 | . (2.8) Since t i can only appear in G(x i , t i ) as polynomials of t ij , and G(x i , λ i t i ) = i λ k i i G(x i , t i ) under rescaling, it is clear from (2.4) that G(U, V ; σ, τ ) is a polynomial in σ and τ of degree E. Writing G(x i , t i ) as in (2.4) exploits only the bosonic part of the superconformal group. Fermionic generators imply further constraints, known as the superconformal Ward identities. It is useful to introduce the following change of variables The superconformal Ward identity reads [38] (z∂ z − α∂ α )G(z,z; α,ᾱ) α=1/z = 0 . (2.10) Because G(z,z; α,ᾱ) is symmetric under z ↔z and α ↔ᾱ, three more identities follow from the above identity by replacing z withz, and α withᾱ. 
The traditional method: diagrammatic expansion The traditional recipe to calculate holographic follows from a standard diagrammatic expansion in AdS. More precisely, one obtains the effective action on AdS d+1 , by performing a Kaluza-Klein reduction of the D dimensional supergravity theory on S D−d−1 . For tree-level four-point functions, the relevant information to be extracted from the effective action is the cubic and quartic vertices. One then uses these vertices to write down all the possible exchange and contact Witten diagrams, and the four-point correlator is given by the sum Here the number of exchanged fields in a specific four-point function is always finite. They are dictated by two selection rules on the cubic couplings. The first is an R-symmetry selection rule, which says that the R-symmetry representation carried by the exchanged fields (say in the s-channel) must appear in the common tensor product of the external representations (i.e., the overlap of the tensor product of rank k 1 , k 2 symmetry traceless representations, and that of k 3 , k 4 ). The second is a cutoff on the conformal twist of the exchanged fields ∆ − < min {k 1 + k 2 , k 3 + k 4 } , (2.12) which arises from the requirement that the effective action must remain finite. We organize the relevant exchanged fields into superconformal multiplets in the table below [39][40][41][42], where the super primary scalar field s p is the bulk dual of the one-half BPS operator O p . The fields A p,µ and C p,µ are vector fields in AdS, and A 2,µ is the graviphoton field dual to the R-symmetry currents on the boundary. ϕ p,µν are the symmetric traceless spin-2 tensor fields, which include the graviton with p = 2, dual to the stress tensor operator. t p and r p are scalar fields. In the We can write the exchange contributions more explicitly as p is the contribution from the multiplet p. Here W (s) ∆, are the standard exchange Witten diagrams in the s-channel with dimension ∆ and spin . Y {d 1 ,d 2 } are R-symmetry polynomials of σ and τ (see Appendix A for details), associated to the exchanged irreducible representation labelled by the R-symmetry quantum numbers {d 1 , d 2 }. Historically, such R-symmetry structures were obtained by gluing together three-point spherical harmonics. However, it is more convenient to obtain them by solving the two-particle quadratic R-symmetry Casimir equation [43], making Y {d 1 ,d 2 } the compact analogues of conformal blocks. The coefficients λ field in (2.16) are pure numbers, which can be fixed by using the explicit cubic vertices and appropriately taking into account the normalization of Y {d 1 ,d 2 } . Finally, G con contains contact Witten diagrams up to four derivatives, and all possible Rsymmetry structures. The simplest zero-derivative contact Witten diagram is denoted by theD-functionD ∆ 1 ∆ 2 ∆ 3 ∆ 4 in the literature, and higher-derivative contact diagrams can be related to the zero-derivative ones by using differential recursion relations. The contact diagram contribution could in principle be computed when the quartic vertices are known. Though clear physically, the traditional method suffers from several severe practical drawbacks. First of all, extracting the vertices, especially the quartic vertices, from the effective action is extremely hard. The general quartic vertices are only known for IIB supergravity on AdS 5 ×S 5 [44], where their complicated expressions filled 15 pages. 
Second, as one increases the external dimensions (more precisely, the extremality E), one is greeted by a proliferation of exchange Witten diagrams. Finally, the exchange Witten diagrams are only tractable in position space when the quantum numbers of fine tuned. When the spectrum satisfies the conditions the exchange Witten diagrams can be written as a finite sum of contact diagrams [45]. This is the case for AdS 5 × S 5 and AdS 7 × S 4 . However, the conditions are not satisfied by the AdS 4 × S 7 background. These practical difficulties make it clear that this brute force approach is extremely cumbersome at best, and unlikely to yield any general result unless powerful underlying organizing principles can be identified. Bootstrap methods In recent years, a number of powerful bootstrap methods [15,16,35,36,46,47] have been developed to efficiently compute holographic correlators, which have superseded the traditional method. These bootstrap methods exploit symmetries and self-consistency conditions, and fix the correlators by making no reference to the explicit details of the effective Lagrangian. Below we give an overview for these methods, and discuss their respective strength and limitations. The position space method. A first improvement of the traditional algorithm was made in [15,16], and was termed the position space method. The idea is to leave λ field in (2.16) as unfixed parameters, and parameterize the most general contact contribution G con with unknown coefficients. In models where the truncation conditions (2.20) are satisfied, one can write the exchange Witten diagrams in terms of a finite number ofD-functions. Furthermore, theD-functions can be uniquely decomposed as where Φ(U, V ) is the scalar box diagram in four dimensions, and the coefficient functions R X (z,z) are rational functions of z andz. One then imposes the superconformal Ward identities (2.10), which can be cast into the same form (2.17) by using differential recursion relations of Φ(U, V ). The superconformal Ward identities uniquely fix all the unknown coefficients in the ansatz, up to an overall rescaling factor. This method has the advantage of being very concrete, and sidesteps the need of obtaining the complicated vertices. On the other hand, the method is applied on a case by case basis, and runs out of steam for higher weight external operators. The position space method can be applied to supergravity theories on AdS 5 × S 5 [15,16], AdS 7 × S 4 [35] and AdS 3 × S 3 × K3 [47] 4 backgrounds. 5 However, it is not applicable to 11D supergravity on AdS 4 × S 7 where the exchange Witten diagrams do not truncate. Finally, the expressions of holographic correlators in position space usually are highly complicated, and beg for a more transparent representation which we now introduce below. • Intermezzo: Mellin space. A useful tool for holographic correlators is the Mellin representation formalism [17,18]. This formalism was exploited in the methods below and later will also be the language of this paper. In the Mellin representation where Q m, (t, u) are degree-polynomials in t and u. The residues Q m, (t, u) vanish for m ≥ m 0 when the conditions (2.20) are satisfied where one Mandelstam variable is eliminated from Q m, (t, u), and P −1 (s, t) is a degree-( − 1) polynomial. In (2.19) we have absorbed the regular terms into the numerator. This is related to the fact that exchange Witten diagrams are not uniquely defined. 
We can add to them any contact terms with degree − 1, which correspond to choosing different on-shell equivalent cubic couplings. • The Mellin algebraic bootstrap method. A more elegant method was formulated in [15,16,35], which rephrased the task of computing holographic four-point functions as solving an algebraic bootstrap problem in Mellin space. This method exploits the special structure of the correlators as dictated by the superconformal Ward identities where G 0 is a protected part of the correlator that does not contribute to the Mellin amplitude. D is a differential operator determined by superconformal symmetry, and H is known as the reduced correlator. We can defined a reduced Mellin amplitude M from H, and translate the differential operator D into a difference operator D in Mellin space. Then we have which implements the superconformal symmetry at the level of Mellin amplitudes. The bootstrap problem is formulated by further imposing Bose symmetry, analytic properties and flat space limit on the Mellin amplitude M. Such algebraic bootstrap problems are highly constraining, and fix the correlators uniquely up to an overall constant. The bootstrap problem for AdS 5 × S 5 was fully solved in [15,16] for arbitrary four-point functions, and led to an extremely compact answer. The merit of this approach is that one can treat all external dimensions on the same footing, and obtain the correlators without computing any diagrams. However, the analytic structure of the reduced amplitude M is not as transparent as that of the full amplitude M. This makes it sometimes difficult to find a general efficient ansatz for M, such as in AdS 7 × S 4 , and the problem is solved only on a case by case basis [35,36]. Moreover, for d = 3 the differential operator D is non-local, which makes it difficult to interpret in Mellin space. Mellin superconformal Ward identities. Complementary to the above Mellin algebraic bootstrap method, is another Mellin space technique that can be applied to any spacetime dimensions, first developed in [36]. This method can be viewed as the Mellin space parallel of the position space method. We can translate (2.11), (2.15), (2.16) into with unfixed λ field , and M con will be taken as an arbitrary degree-1 polynomial in s, t, and a degree-E polynomial in σ, τ . Then we would like to impose the superconformal constraints from the superconformal Ward identities (2.10). This may appear difficult as only U and V appear in the definition (2.18), which is invariant under z ↔z. However, the superconformal Ward identity (2.10) breaks the symmetry of z andz, and creates complicated branch cuts when rewritten in terms of U and V . The observation of [36] is that we can take the sum of a holomorphic and an anti-holomorphic copy 6 Then the coefficients can always be written in terms of polynomials in U and V , which are easy to interpret as difference operators in Mellin space. These difference equations (graded by different powers of the spectator cross ratioᾱ) constitute the Mellin superconformal Ward identities. Imposing these identities, one fixes all the coefficients in the ansatz, up to an overall constant. Note that in Mellin space exchange Witten diagrams are easy to write down for any spacetime dimension and conformal dimensions. This greatly extends the range of applicability of this method. Using this Mellin space technique, [36] obtained the first four-point correlator in AdS 4 × S 7 for the stress tensor multiplet, where all other methods had fallen short. 
On the other hand, the method suffers from the same shortcomings as the position space approach, in that it is difficult to go beyond individual correlators. Other approaches. There are other methods for computing holographic correlators by incorporating bootstrap ideas. By using factorization and supersymmetric twistings, [49] computed the five-point function of one-half BPS operators in the stress tensor multiplet for IIB supergravity on AdS 5 × S 5 . In AdS 3 , there is also a method to construct fourpoint functions from the heavy-heavy-light-light limit, by using crossing and consistency with superconformal OPE [50][51][52]. This approach complements to the bootstrap method in AdS 3 [47]. Properties of the MRV amplitudes While the full Mellin amplitudes appear rather complicated, there are special limits where the amplitudes simplify drastically and give a hint for their underlying organizing principles. One such limit is the Maximally R-symmetry Violating (MRV) limit, introduced in [37]. In the ordering of k 1 ≤ k 2 ≤ k 3 ≤ k 4 , the (u-channel) MRV limit is reached by setting t 1 = t 3 for the auxiliary R-symmetry null vectors. This choice of null vectors means that in G(x i , t i ), t 1 cannot be contracted with t 3 and no t 13 can appear. In terms of the Rsymmetry cross ratios, it corresponds to setting σ = 0, τ = 1. We will denote the MRV amplitude as 7 MRV(s, t) = M(s, t; 0, 1) . (3.1) Note that the MRV limit can also be defined in other channels: in s-channel it corresponds to t 1 = t 2 , and in t-channel it amounts to t 2 = t 3 (case I) or t 1 = t 4 (case II). 8 The three limits are related by Bose symmetry. Restricting to the MRV limit suppresses certain Rsymmetry representations in that channel. For example, all the u-channel supergravity field exchanges are suppressed in the σ = 0, τ = 1 limit because the R-symmetry polynomials all contains at least one power of t 13 . This gives the first simplifying property of MRV amplitudes: The MRV amplitudes have no poles in the u-channel. Moreover, in such special R-symmetry configurations we are allowed to see the interesting phenomenon that the super primary is absent, whereas super descendants are present. 9 In particular, let us consider the long super multiplet where the super primary is a doubletrace operator of the schematic form [: In order for all super descendants (in particular the operator acted with Q 4Q4 which has maximal deviation in R-symmetry from the super primary) to have R-symmetry charges admissible in the tensor products of This implies that in the MRV configuration, the R-symmetry polynomial associated to {d 1 , d 2 } vanishes. Moreover, one can show that the only super descendant which contributes to this limit is Therefore we expect to see long operators (albeit not a super primary) in the u-channel MRV configuration with conformal twist at least (k 2 + k 4 ) + 4. This is reflected by the double pole at u = (k 2 + k 4 ) + 4 in the Γ {k i } factor in (2.18). Upon doing the inverse Mellin integral, we see a logarithmic singularity which is the hallmark of an unprotected long operator. On the other hand, this lower bound for logarithmic singularities cannot be further lowered, because (k 2 + k 4 ) is the minimal twist of the double-trace operators constructed from O k 2 and O k 4 for the super primaries of the long multiplets. 
This implies the second important property of the MRV amplitudes: The MRV amplitudes contain a factor of zeroes These zeroes are precisely needed to cancel one of the double poles in Γ {k i } , such that no logarithmic singularities at these twists show up. All MRV ampltidues These two properties of MRV amplitudes have profound consequences in understanding the structure of holographic correlators. In fact, the u-channel zeroes are satisfied by each individual super multiplet exchange in the s-channel (and separately, in the t-channel). This gives rise to an efficient way to fix the relative values of λ field inside each multiplet. More precisely, we choose the contact terms in the exchange Witten diagrams (2.19) by This choice corresponds to the so-called Polyakov-Regge blocks [53,54] (see also [55][56][57] for related blocks), which have improved u-channel Regge behavior For simplicity, we will focus on case I of (2. 3) in what follows, in addition to the ordering k 1 ≤ k 2 ≤ k 3 ≤ k 4 . However, in the next section when we assemble the ingredients into the final results and express them in terms of κ s , κ t , κ u , the expressions will be valid for any ordering of k i thanks to Bose symmetry. The SO(d) R-symmetry polynomials take the following values in the MRV limit Requiring the presence of the zeroes at every pole s = p + 2m imposes strong constraints on λ field , and solves them in terms of λ s Here we have added a superscript to the coefficients λ (p) field to emphasize that they belong to the p-th multiplet. Inserting the solutions into S (s) p in (2.24) leads to a great simplification. We obtain the following contribution from each super multiplet to the MRV limit where the u-channel zeroes are factored out, leaving just a sum over simple poles with constant residues. The terms in the brackets are just the scalar exchange Mellin amplitude at each simple pole, with f m, E defined in Appendix B. Notice that the MRV amplitude for each multiplet does not depend on the R-symmetry group SO(d). To write down the full MRV amplitude we just need to sum over all multiplets, which is restricted to be finite by the selection rules (3.7) The strength of contribution from each multiplet, captured by λ The three-point coefficients read [58][59][60] , (3.10) where where the number in the brackets is a gluing factor for the R-symmetry due to the fact that we have normalized the R-symmetry polynomials to have unit coefficient for σ E . The MRV amplitudes are then simply given by where with the summation over p inside the finite range (3.7), and MRV (t) (s, t) is related to MRV (s) (s, t) by Bose symmetry. Note that no additional contact terms are allowed in the MRV amplitudes. This follows from the simple fact that contact terms are at most linear in the Mandelstam variables, while the requisite zeroes are already quadratic. The absence of additional contact terms tells us something quite remarkable about the structure of supergravity theories in AdS: supersymmetry in the MRV limit not only determines the relative cubic couplings of components within the same multiplet, but its implication reaches quartic couplings as well. It is also worth pointing out that the MRV amplitudes have an improved u-channel Regge behavior compared to a Witten diagram exchanging a spinning field and with generic choices of contact terms. The MRV amplitudes behave in the same way as the Polyakov-Regge blocks. 
10 Here we use n to collectively denote the numbers of M2, D3 or M5 branes, while in the literature it is more conventional to use N for the number of D3 branes. 11 The extremal three-point functions (i.e., k1 + k2 = k3, etc) are a bit subtle. The finiteness of the bulk effective action requires these correlators to be zero. On the other hand, these three-point functions are non-vanishing in the field theory, and are given by the above formulae. This puzzle is solved by realizing that the supergravity states correspond to a mixture of single-trace and double-trace BPS operators, with mixing coefficients fixed precisely by the vanishing of extremal three-point functions [61,62]. This subtlety, however, does not affect our discussion, because all the poles in the Mellin amplitude are sub-extremal and are agnostic about the subtlety. In position space, mixing can affect the four-point functions by adding certain rational functions formed by product of two-point and three-point functions. However, the mixing effect cannot be detected by the Mellin amplitudes because the rational terms have zero Mellin amplitudes. 4 All tree-level correlators from the MRV limit Full amplitudes from MRV amplitudes A lot more information can be extracted from the MRV limit. In fact, in constructing the MRV amplitudes we have determined all the polar part of the full Mellin amplitude. This follows from the fact that all R-symmetry polynomials (3.4) are non-vanishing in the MRV limit. We can therefore restore the full σ, τ dependence in (2.24) by using R-symmetry. 12 More precisely, we can write down where we have used the Polyakov-Regge blocks and it corresponds to a specific choice of contact terms. Various λ gives the correct residues for any σ and τ . However, note that the s-channel Polyakov-Regge blocks are not symmetric in t and u. More precisely, the Bose symmetry in exchanging 1 and 2 is broken by the choice of the contact terms. This can be easily seen from the fact that the s-channel Polyakov-Regge blocks have improved Regge behavior in the u-channel, but not in the t-channel. To restore the s-channel Bose symmetry in the s-channel multiplet exchange, we give the following simple prescription [37]. The amplitude S (s) p takes the form of a sum over simple poles at s = p + 2m. For each term in the sum, the numerator contains a quadratic factor in u of the form u 2 + α(i, j; m, p) u + β(i, j; m, p) . (4.2) We can restore Bose symmetry, by eliminating m from this factor from the relation where we have substituted the pole values of s into the relation among the three Mandelstam variables. This gives a symmetric s-channel exchange, which we will denote as S (s) p . Using the other generators of the Bose symmetry, we can similarly obtain S (t) p and S (u) p . Note that our prescription is not equivalent to simply using the Mellin exchange amplitudes from Appendix B, which have already been symmetrized (or anti-symmetrized), in (4.1). The difference is obvious in the MRV limit, as the symmetrized bosonic Mellin exchange amplitudes do not have improved u-channel Regge behavior. In principle, having specified the polar part of the amplitude there is still the possibility of adding contact terms. The truly distinguishing feature of our prescription, however, is that the full Mellin amplitude can be written as a sum of exchange amplitudes over multiplets, with no additional contact terms! 
13 The Mellin amplitudes are just given by M(s, t; σ, τ ) = M s (s, t; σ, τ ) + M t (s, t; σ, τ ) + M u (s, t; σ, τ ) , obtained with the above prescription. The absence of the contact terms can be proven by the superconformal Ward identites, as we will discuss in detail in Section 5. Let us now rewrite the Mellin amplitude M(s, t; σ, τ ) into a different form that is more suitable for presentation. As we have seen, the Mellin amplitude has series of simple poles at s = p s + 2m, t = p t + 2m, u = p u + 2m, with A series of poles s = p s + 2m truncates if The sum over m is from 0 to m 0 − 1 or from 0 to n 0 − 1 if only one of them is integer. In the case when both m 0 and n 0 are integers, m is summed over from 0 to min{m 0 , n 0 } − 1. The truncation of poles in t and u is analogous. In the following we will write M s (s, t; σ, τ ) as a sum over poles, and we decompose the numerators into different R-symmetry structures spanned by the monomials of σ, τ The residues R i,j s 0 (t, u) are a sum over supergravity multiplets labelled by the Kaluza-Klein level p in the finite set (3.7) The other two channels M t (s, t; σ, τ ) and M u (s, t; σ, τ ) are similar, and can be obtained from M s (s, t; σ, τ ) by Bose symmetry. Using our method described above, we have calculated R i,j p,m (t, u) for all correlators in AdS 4 × S 7 , AdS 5 × S 5 and AdS 7 × S 4 . We will present their explicit expressions in the next subsection. All Mellin amplitudes for all maximally supersymmetric CFTs Let us define a set of convenient combinations u ± , t ± where we recall that = d−2 2 . We find that the residues from each multiplet take the universal form of R i,j p,m (t, u) = K i,j p (t, u) L i,j p,m N i,j p , (4.13) in any spacetime dimension, and we give below the expressions for K i,j p , L i,j p,m , N i,j p in each background. Let us begin with the case of d = 4, where the bulk theory is IIB supergravity on AdS 5 ×S 5 . The above procedure gives the following result , (4.15) and ] in the denominator. Since k i + k j − p ∈ 2Z + by cubic vertex selection rules, they implement the truncation of poles in the Mellin amplitude. All tree-level four-point functions for AdS 5 × S 5 were given in [15,16] after solving the bootstrap problem, and were written in terms of the reduced Mellin amplitude. The full amplitude can be obtained by acting with the superconformal difference operator R (see [15,16] for details). Upon comparing the residues, we find that above expressions reproduce the known result. Next we turn to d = 6, which corresponds to 11D supergravity on AdS 7 × S 4 . The full solution to all four-point functions were recently obtained in [37]. The residue factors are given by . (4.19) The Gamma functions in L i,j p,m also ensure that the number of poles in the AdS 7 × S 4 Mellin amplitudes is finite. Finally, we consider d = 3 and it corresponds to 11D supergravity on AdS 4 × S 7 . The only correlator which has been obtained in the literature is the four-point function of the stress tensor multiplet [36]. Here we present new results, which generalize to four-point functions of arbitrary one-half BPS operators . ] in L i,j p,m do not guarantee that the Mellin amplitudes should have a finite number of poles. Upon setting k i = 2, we reproduce the result of [36]. Clearly, the Mellin amplitude residues in the three maximally supersymmetric backgrounds are highly similar. In fact, we can accentuate their similarity by writing down a formula which interpolates the M-theory and string theory amplitudes. 
More precisely, we can modify K i,j p , L i,j p,m , N i,j p by introducing -dependence as follows . (4.25) When substituting in = 1 2 , 1 , 2, the above formulae reduce to the results in respective dimensions. Of course, such interpolation formulae that go through the three physical values are far from being unique, and we do not expect on any grounds that M-theory and string theory correlators should be physically connected. Nevertheless, what we wish to highlight is the similarities of analytic structures in the residues, which allow them to be compactly encapsulated in a single set of formulae. We also want to mention that the above sum over the multiplets p can be performed in a closed form, and leads to hypergeometric series. However, we think that it is better to leave the sum unperformed, which makes the analytic structure more clear. Another interesting case is the next-to-next-to-extremal correlators with k 1 = k 2 = 2, k 3 = k 4 = k. Let us only give the explicit result for AdS 4 × S 7 , which has not appeared in the literature. This family of correlators will be the starting point for constructing the four-point function 2222 at one loop. These correlators also have E = 2. Therefore, p = 2 for the s-channel exchanges while p = k for the t-and u-channel exchanges. We have where s + t + u = 2 + k, and Note that when k is odd the pole series in M AdS 4 22kk,s truncates, while if k is even this does not happen. Finally, let us give an example with higher extremality E = 3. We will consider the case with k i = 3. In the sum over multiplets, p now takes values 2 and 4 according to (3.7). Using our formulae, we get The other two channels are related by crossing symmetry WI in Mellin space In the previous section we have constructed the polar part of the general Mellin amplitudes for the backgrounds AdS 4 × S 7 , AdS 5 × S 5 and AdS 7 × S 4 , and claimed that no further contact terms are needed. In order to show that these contact terms are absent, we need to show that these amplitudes satisfy the superconformal Ward Identities (WI). Note that since the WI were not heavily used in our construction, this also serves as a non-trivial check of our results. In the cases of AdS 5 × S 5 and AdS 7 × S 4 one can efficiently impose the WI by requiring the existence of a reduced amplitude M, as discussed in Section 2.2. However, for AdS 4 × S 7 this is not possible. Below we will develop an efficient method to impose the WI in Mellin space at the level of the full amplitude, expanding on [36]. We start by recalling the WI (2.10) in space-time In order to write this relation in Mellin space we first note In Mellin space U ∂ U and V ∂ V have a very simple, multiplicative, action, which follows from the definition (2.18) On the other hand, z does not. In order to proceed we write the Mellin amplitude in terms of the R-symmetry cross ratios α,ᾱ and expand it in powers of α: In terms of the components M (q) (s, t,ᾱ) the WI take the form We can obtain an inequivalent relation by replacing z →z. Considering two independent linear combinations of the relations above we arrive at where we have defined ζ (n) The crucial observation is that, while z andz by themselves do not have a simple action in Mellin space, ζ (n) ± , which should be interpreted as operators, do. Indeed, for each n ζ (n) ± are simply polynomials of U and V , while powers of U and V act in Mellin space as shift operators. 
This leads to the following representation in Mellin space and so on, where U m V n is the shift operator corresponding to U m V n and is given by Note that for a given extremality E only operators up to ζ (E+1) ± appear. An example The simplest example is that of k i = 2, namely the correlator of the stress-tensor multiplet. So let's work out this case in detail. We will focus in the equation (5.7) involving ζ − , which has not been explicitly considered before. In this case the extremality E = 2 and we can decompose the Mellin amplitude as where the dependence onᾱ has not been explicitly show, since it acts as an spectator. The WI takes the form or explicitly after acting with the shift operators Note that this gives M (2) (s, t) in terms of M (0) (s, t) and M (1) (s, t). This is a general phenomenon: For a general extremality E, we can use the WI involving ζ − to solve for M (E) (s, t) in terms of the other ones. Returning to (5.15), for d = 4, 6 we can simply plug the results given in Section 4.3 and check that they indeed satisfy this relation for = 1 and = 2 respectively. For d = 3 we can resum the expression given in (4.28) to obtain Adding the contributions in the t-and u-channel we can obtain the corresponding expressions for M (q) (s, t), for q = 0, 1, 2. Plugging them into (5.15) we can check that indeed, the identity is satisfied for = 1/2. We have checked the above WI for a vast variety of examples. We have found our answer satisfies the WI in each case, without the addition of a contact term. This actually proves that the representation which we have chosen our results provides the full answer, and not just the polar part of the amplitude. WI and the flat space limit It is illuminating to study the superconformal Ward identities and the Mellin amplitudes around the flat space limit, where s, t are large. In the flat space limit shift operators act multiplicatively. Indeed in this limit M(s − 2m, t − 2n) ∼ M(s, t) plus higher order derivative corrections, and one can explicitly check s 2m t 2n (s + t) 2(m+n) + · · · . Plugging these expressions in (5.7) and taking the flat space limit, we observe the equation for ζ + is trivially satisfied to leading order, while the remaining equation gives But this simply implies that in the flat space limit as a consequence of the superconformal Ward identities, in any number of dimensions. The second relation follows from replacing α →ᾱ. From our results, we can study the explicit form of the amplitudes in the flat space limit. In all cases we find with s + t + u = 0 in the flat space limit, and Θ flat 4 (s, t; σ, τ ) = (tu + tsσ + suτ ) 2 . (5.22) is an R-symmetry polynomial explicitly given by Note that the form of the flat-space limit is completely universal, and the prefactor Θ flat 4 as well as the polynomials P {k i } (σ, τ ) do not depend on the number of dimensions. Furthermore, rewriting Θ flat 4 in terms of α,ᾱ and using s + t + u = 0 we obtain which neatly factorizes into a holomorphic and a anti-holomorphic part. Note that the presence of this factor implies the relations (5.21) indeed hold. For d = 4, 6 the presence of the prefactor Θ flat 4 (s, t; α,ᾱ) in the flat space limit has also been discussed in [63,64]. In those cases the solutions to the WI can be written as a shift operator acting on a reduced amplitude, and we can show that the flat space limit of such shift operator always contains the prefactor Θ flat 4 (s, t; α,ᾱ). 
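The claimed factorization of the flat-space prefactor can be verified in a few lines of sympy. The check below is ours; it assumes the standard parametrization σ = αᾱ, τ = (1 − α)(1 − ᾱ) for the R-symmetry cross ratios introduced in Section 2, and imposes the flat-space constraint s + t + u = 0.

```python
import sympy as sp

s, t, a, ab = sp.symbols('s t alpha alphabar')
u = -s - t                      # flat-space constraint s + t + u = 0
sigma = a * ab                  # assumed standard map to (alpha, alphabar)
tau = (1 - a) * (1 - ab)

theta = (t*u + t*s*sigma + s*u*tau)**2          # Theta_4^flat(s, t; sigma, tau)
factorized = ((u + s*a) * (u + s*ab))**2        # holomorphic x anti-holomorphic

print(sp.simplify(sp.expand(theta - factorized)))   # -> 0
```

The factor (u + sα)² depends on α only and (u + sᾱ)² on ᾱ only, which is the holomorphic/anti-holomorphic split referred to in the text.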
Conclusion In this paper we developed a constructive method to obtain tree-level four-point holographic correlators in all theories with maximal superconformal symmetry. Our method exploits the remarkable simplicity of the Mellin amplitude at the MRV limit, in which hide new powerful organizing principles for holographic correlators. The construction of the full amplitude from this limit is universal for all spacetime dimensions, and allows us to derive results for different backgrounds on the same footing. For d = 4, our result constitutes a proof for a widely believed conjecture [15,16]. For d = 6, we reproduce the results recently reported in [37], and for d = 3 we provide new results. Our results lead to an array of interesting questions, applications, and avenues for future research. We list a few below. • The four-point functions we have constructed contain a wealth of CFT data. For d = 3, part of these data can be compared with other exact results from topological twisting and supersymmetric localization [65][66][67][68]. They can also be used to calibrate the numerical bootstrap bounds at large central charge [65,66,69]. • What we have done in this paper can also be viewed as the first step towards carrying out the program of computing loops in maximally supersymmetric supergravity theories, where the tree-level correlators gives essential input for applying the AdS unitarity method [23]. While this program is quite advanced in AdS 5 ×S 5 [24][25][26][27][28][29][30][31][32][33][34]70], it is still in its infancy for AdS 7 × S 4 [71]. Similar progress for AdS 4 × S 7 at one loop yet awaits being made. • In our construction we give a prescription for restoring Bose symmetry in the exchange amplitudes, which at the same time allows the full amplitude to be expressed as a sum over exchange amplitudes with no extra contact terms. The absence of contact terms is a clear indication of on-shell reconstructibility in AdS, and a similar phenomenon was also observed at the level of the five-point function [49]. It would be interesting to have a better understanding of the observed reconstructibility, which could be useful for finding efficient algorithms to construct higher-point correlators. • It would be very interesting to generalize what we have done to non-maximally supersymmetric CFTs in d > 2. Some initial progress using bootstrap methods has been reported in [46] for four-point functions of lowest KK modes. We expect that using the MRV limit will fix the contributions from within each multiplet more efficiently than imposing the Mellin superconformal Ward identities, and therefore streamlines the calculation for higher KK modes. Moreover, it would be interesting to see if the same prescription will continue to absorb the contact terms into the exchange amplitudes when there is less supersymmetry present. • We can also study various other limits of the general four-point correlators. One interesting limit is to take k i large, where we would expect to see the semiclassical behavior of membranes or strings scattering in AdS. • We have also initiated a study of the Mellin superconformal WI (and their solutions) around the flat space limit. It may be interesting to pursue this further to construct the solution to the WI for the d = 3 case, where the solution in position space contains non-local differential operators. 
• There has been some progress in understanding gravitational MHV amplitudes through twistor actions in the presence of a cosmological constant (see [72] and references therein). It would be very interesting to make a connection between that formalism and the results of this paper.

For any fixed degree m − n the relation can be solved. Up to an overall factor, the R-symmetry polynomials take the form
$$Y^{(a,b)}_{mn} = P^{(m-n)}(\sigma\partial_\sigma, \tau\partial_\tau)\, F_4\!\left[\begin{matrix} -m & n + a + b + \tfrac{d}{2} - 1 \\ b+1 & a+1 \end{matrix};\, \sigma, \tau\right] , \qquad (A.9)$$
where we have introduced Appell's generalized hypergeometric function $F_4$,
$$F_4\!\left[\begin{matrix} a & b \\ c & d \end{matrix};\, x, y\right] = \sum_{m,n\geq 0} \frac{(a)_{m+n}(b)_{m+n}}{(c)_m (d)_n\, m!\, n!}\, x^m y^n ,$$
with $(a)_n = \Gamma(a+n)/\Gamma(a)$ the Pochhammer symbol. In the above we have defined the shorthand notations
Number of foetuses in pregnant Mus musculus injected with anti-Qa2 and given mild regular exercise: an endothelial dysfunction animal model to induce preeclampsia

The Pre-eclampsia Community Guideline (PRECOG) defines preeclampsia as a condition identified by a diastolic blood pressure ≥ 90 mmHg and proteinuria at ≥ 20 weeks of pregnancy. The basic mechanism of preeclampsia is endothelial dysfunction. One of the impacts of preeclampsia is intrauterine foetal death, which can be indicated by a reduced number of foetuses. One way to prevent the preeclamptic process is mild regular exercise. The goal of this research was to analyse the effect of mild regular exercise on the number of foetuses in pregnant Mus musculus injected with anti-Qa2 as an endothelial dysfunction animal model to induce preeclampsia. The design was experimental, with 6 Mus musculus per group. The groups were: control (normal pregnancy, K1); pregnant Mus musculus injected with anti-Qa2 (endothelial dysfunction model, K2); pregnant Mus musculus injected with anti-Qa2 and given mild regular exercise from early pregnancy (K3); and pregnant Mus musculus injected with anti-Qa2 and given mild regular exercise starting 1 week before pregnancy (K4). Statistical analysis used the Kruskal-Wallis test (α = 0.05) and showed no significant difference in the number of foetuses among the groups. The conclusion is that mild regular exercise had no significant effect on the number of foetuses in pregnant Mus musculus injected with anti-Qa2 as an endothelial dysfunction animal model to induce preeclampsia.

Introduction
The Pre-eclampsia Community Guideline (PRECOG) defines preeclampsia as a condition identified by a diastolic blood pressure ≥ 90 mmHg and proteinuria at ≥ 20 weeks of pregnancy [1]. The basic mechanism of preeclampsia is endothelial dysfunction, which can have systemic effects [2]. Preeclampsia affects both mother and foetus; among its foetal impacts are intrauterine foetal death and neonatal death [3]. One sign that reflects the impact of preeclampsia is the number of foetuses. One way to prevent the preeclamptic process is mild regular exercise. Mild regular exercise improves cardiorespiratory function, which is beneficial in hypertensive conditions such as preeclampsia [4]. Mild regular exercise increases interleukin 6 (IL-6), which in turn induces the production of interleukin 10 (IL-10), an anti-inflammatory agent [5]. Mild regular exercise also increases endogenous antioxidants such as superoxide dismutase (SOD), catalase, and glutathione peroxidase [6]. Because inflammation and oxidative stress are part of the mechanism of preeclampsia, mild regular exercise is assumed to prevent endothelial dysfunction, the central process of preeclampsia. One way to create endothelial dysfunction in an animal model is to inject anti-Qa2 to block placental Qa2 expression. Placental Qa2 expression in mice is analogous to placental human leucocyte antigen-G (HLA-G) expression in humans, and blocking of HLA-G is a predictor of endothelial dysfunction [7]. The goal of this research was to analyse the effect of mild regular exercise on the number of foetuses in pregnant Mus musculus injected with anti-Qa2 as an endothelial dysfunction animal model to induce preeclampsia.

Methods
This research was a true experiment using a post-test-only control group design. Female Mus musculus were mated with male Mus musculus at a 1:1 ratio, and only females with a positive vaginal plug were used in the research.
The vaginal plug was the sign that mating had occurred, and that day was designated day 0 of pregnancy. The Mus musculus used had to be 3 months old, healthy, 15-25 grams in body weight, moving well, free of body wounds, and with clear eyes. Six pregnant Mus musculus were used per group. The research lasted 5 weeks and consisted of acclimatization, mating of female and male Mus musculus, intervention, and termination. All female Mus musculus were injected with pregnant mare serum gonadotropin (PMSG) and human chorionic gonadotropin (HCG) to synchronize the oestrus cycle. Each female was injected intraperitoneally with 5 IU PMSG and, 48 hours later, with 5 IU HCG intraperitoneally. The females were then mated with males at a 1:1 ratio. On the morning after mating, females and males were separated and the females were examined for a positive vaginal plug. Females with a positive vaginal plug were considered pregnant and were randomized into 4 groups (6 pregnant Mus musculus per group). The work was carried out in the Laboratory of Embryology, Faculty of Veterinary Medicine, Airlangga University. The 4 groups were: K1 (control, normal pregnancy), K2 (pregnant Mus musculus injected with anti-Qa2 as the endothelial dysfunction model), K3 (pregnant Mus musculus injected with anti-Qa2 and given mild regular exercise from early pregnancy), and K4 (pregnant Mus musculus injected with anti-Qa2 and given mild regular exercise starting 1 week before pregnancy). The mild regular exercise was performed on a treadmill with no inclination, at a speed of 7 cm/second for 1 minute, 11 cm/second for 2 minutes, and 14 cm/second for 15 minutes. The exercise started in early pregnancy for K3 and 1 week before pregnancy for K4, and was performed once every 2 days. Termination was carried out on the 19th day of pregnancy: the abdomen was dissected to expose the uterus, and the number of foetuses was counted.

Results and Discussion
The data were obtained by counting the number of foetuses in the uterus of each Mus musculus after termination. The mean numbers are shown in Table 1. The data were normally distributed but not homogeneous, so they were analysed with the Kruskal-Wallis test. In the preeclamptic condition, the placental blood vessels showed infarction and sclerosis, which caused failure of endovascular invasion and inadequate remodelling of the spiral arteries (Figure 1). This condition decreased the oxygen and nutrition delivered to the foetus [8]. Decreased oxygen and nutrition can cause intrauterine foetal death or foetal resorption in Mus musculus, so the number of foetuses becomes lower.
Figure 1. Comparison of placentation in preeclampsia and normal pregnancy [8].
Table 1 also showed that the number of foetuses in K3 was lower than in K2. K3 was given mild regular exercise from early pregnancy, and K2 was the endothelial dysfunction animal model. This suggests that mild regular exercise started in early pregnancy was not sufficient to reduce the impact of preeclampsia on the foetus, possibly because the exercise in this case could not produce enough IL-10 and endogenous antioxidants to protect the body against the preeclamptic process. However, when the exercise was started 1 week before pregnancy, the number of foetuses increased. Exercise started 1 week before pregnancy may act as an early initiation of the adaptive response; if continued through pregnancy, it becomes a chronic stimulus with a positive effect on the body.
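For reference, the statistical comparison reported above can be reproduced with a few lines of Python. The counts below are placeholders (the study data are in Table 1, which is not reproduced in this excerpt), and α = 0.05 is the stated significance level.

```python
# Minimal sketch of the reported analysis: Kruskal-Wallis test on the number of
# foetuses per dam in the four groups.  The counts are placeholders, not data.
from scipy.stats import kruskal

k1 = [9, 8, 10, 9, 11, 8]   # control, normal pregnancy
k2 = [7, 6, 8, 7, 6, 9]     # anti-Qa2 (endothelial dysfunction model)
k3 = [6, 7, 7, 8, 6, 7]     # anti-Qa2 + exercise from early pregnancy
k4 = [8, 7, 9, 8, 7, 8]     # anti-Qa2 + exercise from 1 week before pregnancy

stat, p_value = kruskal(k1, k2, k3, k4)
print(f"H = {stat:.2f}, p = {p_value:.3f}")
print("significant at alpha = 0.05" if p_value < 0.05 else "not significant")
```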
Thus, regular exercise started 1 week before pregnancy may produce enough IL-10 and endogenous antioxidants to prevent the impact of the preeclamptic process, especially on the foetus. The anti-inflammatory and antioxidant mechanisms of exercise are illustrated in Figure 2.
Figure 2. Mechanism of exercise to prevent preeclampsia [9].
Sufficient IL-10 and endogenous antioxidants can activate endothelial cells and improve their function [10]. As a result, endothelial dysfunction does not develop, endovascular invasion becomes adequate, and oxygen and nutrition are properly distributed to the foetus, preventing intrauterine foetal death.
Table 1 showed that the Kruskal-Wallis test found no significant differences, i.e. the number of foetuses did not differ among the groups. This indicates that the production of IL-10 and endogenous antioxidants was not sufficient to significantly stabilize the pregnant Mus musculus in the endothelial dysfunction model. This is similar to Tomic et al. (2013), who found no significant difference in intrauterine growth restriction and other perinatal outcomes; in that study the causes were attributed to decreases in glucose level and to the frequency, duration and intensity of the exercise [11]. Another possible reason for the lack of a significant difference in the present research is the type of exercise given to the groups, namely a treadmill with no inclination. Kurniawati (2015) showed a significant positive effect of exercise in pregnant women using aquarobics. Aquarobics is a mild exercise that can increase oxygen consumption and strengthen muscle; in her research, pregnant women in the third trimester had stable blood pressure and heart rate after 1 month of aquarobics, performed twice a week for 1 hour per session [12]. Nevertheless, a difference can be seen in the descriptive data of the present study, so mild regular exercise does appear to have a positive effect, but further analysis of the type of exercise is needed to optimize it.

Conclusion
There was no significant effect of mild regular exercise on the number of foetuses in pregnant Mus musculus injected with anti-Qa2 as an endothelial dysfunction animal model to induce preeclampsia.
A fully coupled computational fluid dynamics – agent-based model of atherosclerotic plaque development: Multiscale modeling framework and parameter sensitivity analysis
Corti, Anna; Chiastra, Claudio; Colombo, Monika; Garbey, Marc; Migliavacca, Francesco; Casarin, Stefano. In: COMPUTERS IN BIOLOGY AND MEDICINE, ISSN 0010-4825, 118 (2020), p. 103623. [10.1016/j.compbiomed.2020.103623]
The multiscale modeling framework and its parameter sensitivity analysis are here discussed in detail. Moreover, additional work was done to improve the model reliability, with particular attention directed to the sensitivity analysis of the ABM, performed to evaluate the model output in response to variations of the input parameters. Indeed, because the driving coefficients of the model could not be calibrated directly on experimental data, they were selected heuristically, leading to a level of uncertainty that needed to be quantified. The sensitivity analysis allowed us to understand the impact of the uncertain inputs on the model response and to identify the most influential parameters, whose future calibration on experimental data will improve the model accuracy [29]. Finally, the results of this analysis provided further insight into the model mechanisms, namely a verification of the ABM response with respect to the model laws and the identification of unexpected or previously unconsidered behavior.
Multiscale framework
Figure 1 shows the structure of the complete multiscale framework, which consists of four cyclically repeated steps [28]. First, an idealized 3D model of the lumen of a tortuous portion of a healthy Superficial Femoral Artery (SFA) is built and a 3D mesh of the fluid dynamic domain is generated using ICEM CFD (v. 18). Specifically, depending on the WSS profile computed by the CFD simulation, the ABM replicates the physiologic or pathologic arterial wall remodeling occurring in a predefined time-frame (e.g. one week of ABM simulated time). At the end of said period, here referred to as the coupling period, the ABM simulations are stopped to perform a new CFD simulation in the modified (i.e. updated) 3D geometry. Indeed, the geometrical changes computed by the ABM in each 2D plane affect the fluid dynamic domain, implying the need to update the hemodynamics and the WSS distribution by coupling the ABMs back to the CFD module. To do that, a new 3D geometry of the lumen is reconstructed by lofting the luminal curves of the M ABM outputs, and the four main steps of the multiscale framework are then re-performed. The entire process stops at the end of a predefined follow-up period (e.g. at two months of ABM simulated time). Each of the aforementioned steps, as well as the coupling process, requires user intervention.
Fig. 1. Multiscale computational framework [28]. Starting from a 3D model of a healthy artery, the physiologic/pathologic wall remodeling is simulated through a four-block scheme: (i) geometry preparation and meshing, (ii) CFD simulation, (iii) ABM simulation and (iv) new 3D geometry. The CFD and ABM modules constitute the multiscale core of the framework, acting on the seconds/tissue and weeks/cell scales, respectively.
The multiscale core is based on the CFD and the ABM modules, embedded in the dashed red box in A step-by-step extended description is provided below with the following order: i) geometry preparation and meshing, ii) CFD simulation, iii) ABM simulation, and iv) retrieval of the new 3D geometry. Geometry preparation and meshing A simplified 3D geometry of the lumen of a healthy artery with SFA-like features was initially built using the CAD software Rhinoceros (v. 6 The 3D geometry was imported into ICEM CFD to generate the 3D mesh of the lumen for the CFD simulation. A hybrid tetrahedral mesh with five boundary layers of prism elements was created with the Octree method [30]. As global mesh parameters, an element maximum size of 0.39 mm was set and the curvature/proximity based refinement was enabled with a minimum element size of 0.156 mm and a refinement of 20 edges along a radius of curvature. The five layers of prism elements were generated with an exponential growth law, setting 1.05 as height ratio. Finally, the mesh was globally smoothed by imposing five smoothing iterations and a quality criterion up to 0.4. The resulting mesh, shown in CFD simulation Steady-state CFD simulations were performed using Fluent to compute the hemodynamics in the 3D artery model. Since the arterial wall remodeling, computed by the ABM, occurs in the time scale of weeks, while the cardiac output waveform is in the order of the seconds, cellular dynamics were assumed to depend on the average WSS. Accordingly, to avoid excessive time consumption, a steady flow was imposed to approximate the average hemodynamics. At the inlet cross-section, a constant parabolic-shaped velocity profile with physiological mean velocity was applied. The mean velocity was derived from the analysis of patient's Doppler ultrasound image at the SFA level [31], following a proper scaling to make it consistent with the current inlet area. At the outlet cross-section, a reference zero pressure was imposed. No-slip wall boundary condition was specified at the arterial lumen, assumed as rigid. A density of ρ=1060 kg/m 3 was set for blood, modeled as a non-Newtonian Carreau fluid, as in [32]. The simulation was run using the pressure-based solver with coupled method as pressure-velocity coupling method, least square cell-based scheme for the spatial discretization of the gradient, second-order scheme for the pressure and second-order upwind scheme for the momentum spatial discretization [33]. At the end of the CFD simulation, WSS profiles were extracted at pre-selected M = 10 internal circular planes perpendicular to the centerline ( Fig. 2A) and used as hemodynamic input to the corresponding ABM. The choice of M = 10 planes guaranteed a reliable reconstruction of the 3D vessel geometry. ABM simulation In general, starting from an initial homeostatic condition, the 2D ABM replicates a physiologic or pathologic arterial wall remodeling (depending on the WSS profile) on a vessel wall cross-section, by locally simulating cell mitosis/apoptosis, ECM production/degradation and lipid infiltration in the intima. The model was developed assuming that the risk factors promoting the disease were already present. Accordingly, the process of plaque formation in a specific region is purely driven by the hemodynamic conditions, and specifically by the WSS profile. The implemented ABM was inspired to the one developed by Garbey et al. [34,35] for the simulation of VGB post-surgical adaptation. 
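Before detailing the ABM, the cyclic workflow described above can be summarized in a compact driver script. The sketch below is ours, not the authors' code: each helper is a placeholder for a manual step (ICEM CFD meshing, the Fluent steady-state run, the 2D ABM simulations, the Rhinoceros lofting) and returns dummy values.

```python
# Schematic driver for the fully coupled CFD-ABM framework (illustrative only).
COUPLING_PERIOD_H = 7 * 24      # one week of ABM simulated time per coupling
FOLLOW_UP_H = 2 * 30 * 24       # two-month follow-up
N_PLANES = 10                   # cross-sections passed to the 2D ABMs

def mesh_and_run_cfd(geometry):
    """Mesh the lumen, run the steady CFD simulation and extract the WSS
    profile at each of the N_PLANES internal planes (placeholder)."""
    return [[1.2] * 36 for _ in range(N_PLANES)]      # placeholder WSS [Pa]

def run_abm(wss_profile, hours):
    """Run one 2D ABM (10 Monte Carlo repetitions) for `hours` of simulated
    time; return the representative cross-section (placeholder)."""
    return {"lumen_curve": None}

def rebuild_geometry(cross_sections):
    """Loft the luminal curves of the selected ABM outputs into a new 3D model
    (placeholder)."""
    return {"sections": cross_sections}

def coupled_framework(initial_geometry):
    geometry, elapsed = initial_geometry, 0
    while elapsed < FOLLOW_UP_H:
        wss = mesh_and_run_cfd(geometry)                    # steps (i)-(ii)
        sections = [run_abm(wss[m], COUPLING_PERIOD_H)      # step (iii)
                    for m in range(N_PLANES)]
        geometry = rebuild_geometry(sections)               # step (iv)
        elapsed += COUPLING_PERIOD_H
    return geometry
```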
However, different vessel structure and composition, as well as agent types and dynamics were implemented in the present work, which also deals with new cellular events. Figure 3 shows the ABM flowchart. Following the geometrical and hemodynamic initialization, at each time step of one hour, the model computes cell/ECM and lipid dynamics that drive the remodeling of the wall. Then, in order to retrieve smooth profiles and guarantee structural integrity, the lumen and external walls are regularized at each iteration until the end of the simulated period. To replicate the cellular and extracellular events, probabilistic behavioral rules were assigned to each agent and the simulations were performed with Monte Carlo method, allowing capturing the intrinsic variability of biological processes. Due to the stochasticity of the present ABM, the output of a single simulation cannot be considered as a representative solution. Thus, N = 10 independent simulations were run starting from the same initial condition and the average trend was evaluated. The choice of N was dictated by the need of a reasonable trade-off between computational time and minimization of the standard deviation. A basic solution was generated by opportunely calibrating the agent dynamics in order to stabilize the system around an equilibrium working point. This condition, representative of the homeostatic state of a healthy artery, was then perturbed to simulate the process of atherosclerotic plaque formation. Prior to the building of the fully coupled framework, the ABM behavior was verified both under physiologic and atherogenic conditions. For this purpose, a single CFD-ABM coupling was performed to initialize the ABM with the hemodynamic input and the ABM simulations were run for two months. Differently, within the fully coupled framework, the ABM running time corresponded to the chosen coupling time between ABM and CFD modules. The ABM simulations were run on a 16.00 GB RAM CPU, Intel® Core™ i7-4790, with 4 Cores and 8 Logical Processors. Details on the ABM initialization, agent dynamics and geometrical regularization are provided below. Initialization. The ABM was implemented on a 2D <130 x 130> hexagonal grid, representing a good compromise between affinity to the isotropic reality and level of complexity and computational efforts. The initial geometry is a 2D circular cross-section composed by 3 concentric layers, i.e. tunica intima, media and adventitia ( Fig. 4A) with the internal and external elastic laminae (IEL and EEL) separating the intima and the media and the media and adventitia, respectively (Fig. 4B). While sites in the lumen and in the external portion to the wall are initially empty, each site within the wall is randomly seeded with a cell or an ECM, coherently with the cell/ECM ratio of each layer [36][37][38]. Intima and media are initialized with SMCs and elastin and collagen as ECM, with a SMC/ECM ratio of 0.72 [36] and a collagen/elastin ratio of 0.63 [38], while fibroblasts and collagen fill the adventitial sites with a ratio of 0.43 [37]. Figures 4C and 4D show cellular and extracellular composition of intima, media and adventitia layers. For simplicity, no differentiated behaviors between SMC and fibroblasts or between elastin and collagen were implemented. 
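The random seeding of the wall just described can be sketched as follows. This is our sketch, not the authors' implementation: the hexagonal grid itself is omitted, the cell/ECM ratio is interpreted as cells-to-ECM and the collagen/elastin ratio as collagen-to-elastin, and the site counts are arbitrary.

```python
import random

# Seed each wall site with a cell or an ECM agent according to the layer-wise
# ratios quoted in the text (cell/ECM = 0.72 in intima and media, 0.43 in the
# adventitia; collagen/elastin = 0.63 in intima and media).
CELL_ECM_RATIO = {"intima": 0.72, "media": 0.72, "adventitia": 0.43}

def seed_site(layer, rng=random.random):
    p_cell = CELL_ECM_RATIO[layer] / (1.0 + CELL_ECM_RATIO[layer])
    if rng() < p_cell:
        return "SMC" if layer in ("intima", "media") else "fibroblast"
    if layer == "adventitia":
        return "collagen"                       # adventitial ECM is collagen only
    return "collagen" if rng() < 0.63 / 1.63 else "elastin"

sites = [("intima", i) for i in range(500)] + [("media", i) for i in range(1500)]
agents = {(layer, i): seed_site(layer) for layer, i in sites}
print(sum(a == "SMC" for a in agents.values()), "SMCs seeded")
```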
Accepted manuscript at https://doi.org/10.1016/j.compbiomed.2020.103623 13 As previously mentioned, the 2D ABM is informed with an initial WSS profile, which can potentially trigger a pathologic vascular remodeling by perturbing the baseline cellular activity and favoring lipid infiltration and accumulation within the arterial wall [3,4]. Indeed, a low WSS affects the endothelial function by down-regulating atheroprotective genes and up-regulating the atherogenic ones, eventually promoting atherosclerotic plaque formation [4,39]. Accordingly, each site i of the lumen wall is initialized with a WSS value obtained from the 3D CFD simulation, and a level of endothelial dysfunction is computed as follows: is the WSS at site i and 0 = 1 is the assumed pathologic/physiologic WSS threshold. 0 was set in accordance with the study of Samady et al. [40], in which areas exposed to WSS lower than 1 Pa developed greater lumen area reduction. Moreover, the chosen threshold agrees with the physiological range of WSS in the SFA, identified to be between 1.5-2 Pa [41]. In the ABM, each dysfunctional endothelial site i, with ≠ 0, triggers a state of alteration that diffuses within the intima through isotropic diffusion, from a peak of intensity with a diffusion constant : from different endothelial sites are summed up to define the global level of inflammation of the k-th site , as follows: NL is the initial number of sites of the lumen wall (i.e. endothelial sites) and the resulting affects the agent dynamics, as described below, promoting atherosclerotic plaque formation. Since the purpose of the present model was not to accurately replicate the mechanisms of endothelial dysfunction and the early inflammatory processes occurring during atherogenesis, the endothelium and inflammatory cells or molecules were not explicitly modeled. However, Eqs (1) - (3) were implemented to capture the key role of the hemodynamic input in the pathogenesis of atherosclerosis, whose effect, thanks to the mediation of the endothelial layer, is transferred to the interior sites. Specifically, if all the WSS values at the i-th sites are greater than the threshold, = 0 ∀ and = 0 everywhere. Under this condition, defined atheroprotective, the homeostasis of a healthy artery is replicated. On the contrary, if there is at least one site of the lumen wall exposed to a When a site containing cell/ECM is accessed (i.e. when it is in its potentially active state), a Monte Carlo simulation determines whether the potential event is happening or not, as shown in the "event assessment" phase of Fig. S1. The CPU generates a random number ∈ [0; 1] that is compared with the probability of the event itself, , and the event occurs if < . The cellular events of interest are, as baseline of activity, mitosis/apoptosis for cells and deposition/degradation for ECM, and, only under atherogenic condition, the ABM also simulates the process of lipids infiltration. In Fig. 5, the workflow adopted for the implementation of the agent dynamics and the corresponding parameter setting is shown. A detailed analysis of the biological processes occurring during atherosclerosis initiation and progression was performed, with a focus on the cellular, extracellular and lipid dynamics, which led to the definition of their probabilistic behavioral rules. The final probability equations depend on a set of coefficients α i , adopted to weigh a specific influencing factor in the global agent behavior or to set the probability in the interval (0:1). 
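A minimal numerical sketch of this hemodynamic initialization is given below (ours). Because the explicit forms of Eqs. (1)-(3) are not fully legible in this excerpt, the snippet assumes a dysfunction measure that grows linearly as the WSS drops below WSS_0 = 1 Pa and an isotropic Gaussian-like decay with an assumed diffusion constant D; both choices are placeholders for the paper's expressions.

```python
import numpy as np

WSS_0 = 1.0   # pathologic/physiologic threshold [Pa]
D = 10.0      # assumed diffusion constant (placeholder)

def endothelial_dysfunction(wss):
    """delta_i = 0 for WSS >= WSS_0 and grows linearly as the WSS drops below it."""
    return np.maximum(0.0, 1.0 - np.asarray(wss) / WSS_0)

def inflammation(delta, lumen_xy, intima_xy):
    """Sum the contribution of every dysfunctional endothelial site i to the
    inflammation level I_k of every intimal site k (assumed Gaussian spread)."""
    d2 = ((intima_xy[:, None, :] - lumen_xy[None, :, :]) ** 2).sum(-1)
    return (delta[None, :] * np.exp(-d2 / D)).sum(axis=1)

# Toy usage: 8 endothelial sites on a unit circle, 3 interior intimal sites.
theta = np.linspace(0, 2 * np.pi, 8, endpoint=False)
lumen = np.c_[np.cos(theta), np.sin(theta)]
intima = np.array([[1.1, 0.0], [0.0, 1.1], [-1.1, 0.0]])
wss = np.array([0.4, 0.6, 1.2, 1.5, 1.3, 0.9, 0.5, 0.3])
print(inflammation(endothelial_dysfunction(wss), lumen, intima))
```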
The numerical value assumed by those coefficients was derived through an iterative process in which the output of the ABM was verified in terms of integrity and qualitative resemblance to histological or literature evidences. In the present section, the values obtained from the aforementioned process are listed below each equation and tagged as default values. This procedure allowed us to obtain a reasonable range for each coefficient. However, since they were not experimentally derived, they are associated with uncertainty, which is reflected in the model output. For this purpose, a sensitivity analysis of the α i parameters was performed and detailed in section 2.2.1. In future works, the parameters α i that emerged as driving ABM coefficients will be calibrated against experimental data and the ABM will be finally validated. Before implementing the pathological cellular dynamics, the baseline densities of probability were set for cell mitosis/apoptosis and ECM deposition/degradation to replicate the physiological conditions. They were defined with Eq. (4) and Eq. (5), respectively: 1 , 4 and β were imposed to guarantee the maintenance of the physiologic cell/ECM ratio defined at the initialization phase for each tissue layer. 1 was introduced to obtain a baseline probability of said event within a realistic unit of measure [34]. While cell agents are responsible for cell mitosis and apoptosis and ECM production, ECM agents are involved in ECM degradation, meaning that the code scans the grid looking for cells or ECM, respectively. It results that, due to the prevalence of ECM on cells, the model has the tendency to preferentially degrade ECM, instead of producing it. Accordingly, to replicate a baseline condition where ECM production and degradation are averagely balanced, an adjusting coefficient β was introduced and calibrated for each layer. Specifically, since the intima and the media layer have the same cell/ECM ratio = 0.72, a single βint/med was defined for these two layers, while a different value, βadv, for the adventitia, being the adventitia composed by a cell/ECM ratio of 0.43. To this aim, ten ABM simulations were run under physiologic conditions with several tentative βint/med and βadv values and, at a 2-months follow-up, the ratio between final and initial ECM, , was computed for each layer. Considering the work of Garbey et al. [34], a first set of ten simulations was run with initial guesses of βint/med = 2.13 and βadv =2.5 and provided > 1 in the intima and media layers and = 1 in the adventitia layer. Other five values corresponding to 1, 1.5, 1.6, 1.75 and 2 were investigated to calibrate βint/med and, by interpolating the vs. plot in correspondence of = Fig. 6. Therefore, β = {1.57,1.57,2.5} were set for intima, media and adventitia, respectively, in order to guarantee stable trends of ECM in each layer under baseline conditions. With said calibrated coefficients, Eq. (4) and Eq. (5) drive the physiological wall remodeling, leading to the replication of the homeostatic state of a healthy artery. Differently, under atherogenic conditions (i.e. when at least one site of the lumen wall is associated with a < 1 ) cellular mitosis and ECM production in the intima are perturbed to model the increased cellular activity involving the intima layer during atherosclerosis. This translates into a modification of the probability densities defined with Eq. (4) and Eq. (5). 
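To make the Monte Carlo sweep concrete, a schematic version of one ABM hour under physiologic (baseline) conditions is sketched below (ours). The baseline probabilities stand in for Eqs. (4)-(5), whose coefficients α1 and α4 are not reproduced here; β is the calibrated layer-wise factor {1.57, 1.57, 2.5} that rebalances ECM production against degradation, applied here to the production probability as an assumption.

```python
import random

BETA = {"intima": 1.57, "media": 1.57, "adventitia": 2.5}
P_BASE_CELL = 0.001     # placeholder baseline mitosis/apoptosis probability
P_BASE_ECM = 0.001      # placeholder baseline ECM production/degradation probability

def sweep(agents, rng=random.random):
    """One ABM time step (1 h): every potentially active agent is visited and
    its event occurs if a uniform random number r falls below its probability."""
    events = []
    for (layer, i), kind in agents.items():
        if kind in ("SMC", "fibroblast"):               # cells drive mitosis,
            if rng() < P_BASE_CELL:                     # apoptosis and production
                events.append(("mitosis", layer, i))
            if rng() < P_BASE_CELL:
                events.append(("apoptosis", layer, i))
            if rng() < BETA[layer] * P_BASE_ECM:
                events.append(("ECM production", layer, i))
        else:                                           # ECM agents drive degradation
            if rng() < P_BASE_ECM:
                events.append(("ECM degradation", layer, i))
    return events

demo = {("intima", i): ("SMC" if i % 3 == 0 else "ECM") for i in range(300)}
print(len(sweep(demo)), "events in one simulated hour")
```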
Specifically, the probability of cell mitosis and ECM production in the intima increases with the inflammation level, the number of neighboring lipids and the closeness to the lumen [42], leading to the perturbed probability densities of Eq. (6) and Eq. (7). The corresponding coefficients were set so as to accelerate plaque formation under atherogenic conditions, so that the pathological processes in the planes exposed to an atherogenic WSS profile arose within two simulated months. This choice, although not realistic, was dictated by the need to reduce the elevated computational time. Under atherogenic conditions, the ABM also implements the process of lipid infiltration in the intima. In order to simulate the early adaptive intimal thickening [3], lipid dynamics is activated once the intima thickens over a given threshold, here set as IT = 6 sites. Since circulating low density lipoproteins were not explicitly modeled, the probability of lipid infiltration is computed as the probability that a site k at the lumen wall allows lipids to invade the intima, expressed by Eq. (8), where α5 = 0.05 sets the event probability in the interval (0;1). The terms weighted by α6 and α7 promote lipid clustering by increasing the probability of a lipid to occupy a site k close to another lipid: an exponential term decaying with the distance between k and its closest lipid, and a term accounting for the number of lipids neighboring that closest lipid. α6 = 10 weighs the distance term between k and its closest lipid, and α7 = 6 is a normalization constant that maintains the corresponding ratio in the interval (0;1). Also in this case, the terms and coefficients of Eq. (8) were set following the framework in Fig. 5 to obtain a lipid core resembling histological features [43]. At each time step only one lipid can enter the intima. To determine the site of access for the lipid, the ten sites of the lumen wall with the highest infiltration probability are explored. Starting from the most probable site, the Monte Carlo test is applied and, if the event is accepted, k is the designated site of access; otherwise the following site of the list is investigated, up to ten. This translates into assuming that a lipid has a total number of chances, trylip = 10, to migrate into the intima. In the present work, it was assumed that lipids might continue entering the intima until the lipid core potentially occupies at most 15% of the lumen area. Although not representative of real biological mechanisms, this condition allowed replicating a progressive growth followed by a stabilization of the lipid core, which agrees with the choice not to simulate the phenomenon of plaque rupture, more likely associated with a continuous growth of the lipid-rich necrotic core [44]. Moreover, in PAD, which is the context of the current study, atherosclerotic plaques are usually characterized by a smaller lipid core than in coronary artery disease, and are less subject to rupture, which further corroborates the assumption on the lipid core size [43]. Tissue plasticity and geometrical regularization. To accommodate the production or removal of an element while performing agent dynamics ("event manifestation" phase of Fig. S1), the tissue reorganizes by following a minimum energy principle, according to which agents move along the shortest path to the target site [34]. In case the active site is at the luminal or external wall border, the production of a cell/ECM results in the addition of a new agent, positioned in a random empty space surrounding the active agent itself. Under the same condition, if the active agent undergoes death or degradation, it is simply removed from the computational domain, leaving an empty space.
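The lipid-entry logic can be sketched as follows in Python; the composition of the probability expression is an assumption meant only to mirror the roles of α5, α6 and α7 described above, not the exact Eq. (8), and the candidate-site data are invented for the example.

import math
import random

ALPHA5, ALPHA6, ALPHA7, TRY_LIP = 0.05, 10.0, 6.0, 10  # default values quoted in the text

def lipid_entry_probability(dist_to_closest_lipid, n_neighbors_of_closest_lipid):
    # Illustrative composition: baseline weight times a distance-decay term and
    # a neighbor term promoting clustering, capped to stay within [0, 1].
    clustering = ALPHA6 * math.exp(-dist_to_closest_lipid) * (1 + n_neighbors_of_closest_lipid / ALPHA7)
    return min(1.0, ALPHA5 * clustering)

def pick_entry_site(candidate_sites):
    # Scan the most probable lumen-wall sites; the first one passing the
    # Monte Carlo test becomes the access site (at most TRY_LIP attempts).
    ranked = sorted(candidate_sites, key=lambda s: s["p"], reverse=True)[:TRY_LIP]
    for site in ranked:
        if random.random() < site["p"]:
            return site["id"]
    return None  # no lipid enters the intima at this time step

sites = [{"id": k, "p": lipid_entry_probability(dist, n)}
         for k, (dist, n) in enumerate([(0.5, 3), (2.0, 1), (5.0, 0)])]
print("lipid enters at site:", pick_entry_site(sites))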
Conversely, in the case of an agent inside the arterial wall, a pushing or pulling movement of the surrounding elements allows the production or removal of an element. For example, in the case of element production, a site adjacent to the mitotic/synthetic cell is freed thanks to the movement of the surrounding elements either towards the lumen or the exterior, respectively if the site is in the intima or in the media/adventitia. Similarly, when an element is removed, its site is occupied by an inverse movement of the neighboring agents. Agent movement must always comply with the minimum energy principle, with the only exception constituted by the presence of lipid agents along the shortest path. Once they enter the intima layer, lipid agents must maintain their position throughout the entire simulation, thus constituting an obstacle to the movement of the surrounding elements. Consequently, the agent movement is performed along the shortest path that does not involve lipid agents, thus preserving the lipid core. Retrieval of the new 3D geometry. Within the fully coupled framework, at the end of the ABM simulation period, N = 10 output solutions, different in terms of morphology, composition and plaque features, are obtained for each of the M = 10 cross-sections of the artery. Accordingly, an innovative method was developed to select, for each cross-section, the output configuration that most resembled the corresponding average solution in terms of i) lumen radius, ii) external radius, and iii) plaque size. The procedure allowed building the new 3D geometry at the end of each cycle of the framework in Fig. 1: the configuration minimizing a weighted measure of deviation from the average solution with respect to the three quantities above is selected, where each j-th quantity is weighed by a dedicated coefficient. The same criterion was applied for all the M = 10 cross-sections and the 3D geometry was finally reconstructed in Rhinoceros by lofting the lumen profiles of the selected configurations. ABM sensitivity analysis. Since the model output is largely affected by the parameter setting and none of the parameters was derived from experiments, a sensitivity analysis of the ABM parameters was performed. The goals were (i) to evaluate the oscillation of the model solution due to the uncertainty of the parameters or to the possible inter-subject variability and (ii) to identify the parameters that mainly drive the ABM output. For this purpose, first a mono-parametric and then a multi-parametric sensitivity analysis were carried out, as detailed below. In both analyses we defined v = {α2, α3, α5, α6, α7, IT, trylip} as the parameter set under investigation. α1 and α4 were not included in the analysis because they were already calibrated in [34]. For each parameter, a triangular probability density function was defined, based on the parameter range and its most probable value, as shown in the corresponding table. Aligned with the purpose of the current analysis, all the ABM simulations were initialized with the same WSS profile, corresponding to the one computed at the 9th plane of the SFA-like geometry (Fig. 2A). Indeed, being the most critical hemodynamic scenario, it activates a prompt and intense atherogenic response, allowing the effects of parameter variation on the ABM response to be appreciated within just one month of follow-up. This is convenient, considering the high computational costs required by the sensitivity analysis. Moreover, since the focus of the analysis was the ABM, the hemodynamic update was not considered and only 2D ABM simulations were run. Mono-parametric sensitivity analysis.
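A minimal sketch of the selection step is given below, assuming equal weights for the three quantities (the actual weighting coefficients are not reproduced here) and using illustrative numbers for the ten output configurations.

import numpy as np

# Ten ABM outputs for one cross-section, summarized as
# [lumen radius, external radius, plaque size]; values are illustrative.
outputs = np.array([
    [2.90, 4.10, 0.35], [3.00, 4.00, 0.30], [2.80, 4.20, 0.40], [3.10, 4.10, 0.28],
    [2.95, 4.05, 0.33], [2.85, 4.15, 0.37], [3.05, 4.00, 0.29], [2.90, 4.10, 0.36],
    [2.75, 4.20, 0.42], [3.00, 4.05, 0.31],
])
weights = np.array([1.0, 1.0, 1.0])  # assumed equal weights for the three quantities

average = outputs.mean(axis=0)
# Weighted distance of each configuration from the average solution.
distance = np.sqrt((((outputs - average) ** 2) * weights).sum(axis=1))
selected = int(np.argmin(distance))
print("configuration selected for the 3D reconstruction:", selected)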
The probability density function of each parameter was divided in five equal probability intervals and the medium value for each interval was considered. Moreover, two additional values were included in the analysis to explore the ABM behavior at the extremes of the parameter range, thus investigating seven values for each of the seven parameters, shown in Tab. 2. As mentioned above, in this analysis only one parameter at a time was varied, while keeping all the others at their default values, resulting in 49 cases, each with ten replicates. The results were analyzed in terms of lumen area and intimal content of SMCs, ECM and lipids. Indeed, the intima is the layer that is mostly affected by the pathologic wall remodeling occurring in atherosclerosis. The statistical analysis of the results was performed in Matlab. Multi-parametric sensitivity analysis. Latin hypercube sampling (LHS) was adopted to randomly sample the triangular probability density function of each parameter and define the parameter set for the ABM simulations [45]. This method allows exploring the entire range of each parameter and achieves good accuracy with a limited number of simulations compared to simple random sampling [45]. In this study, the probability density functions of the j=7 {α2, α3, α5, α6, α7, IT, trylip} parameters were divided into k=10 equal probability intervals and the LHS matrix (k x j) was generated, identifying the k=10 ABM parameter combinations: For each k-th parameter set, ten simulations were run to account for the inherent stochasticity. Partial Rank Correlation Coefficients (PRCC) were computed to quantify the correlation of the target outputs (i.e. lumen area and intimal content of SMCs, ECM and lipids) with each parameter, while removing the effect of the remaining parameters. To compute the PRCC, the average target outputs of the ten replicates for each k-th simulation was considered [45]. PRCC can span from -1 to +1, corresponding to a perfect negative/positive correlation, respectively, and a p-value is associated to each correlation to assess the statistical significance. Correlations were considered statistically significant if the corresponding p-value was lower than 0.05. Sensitivity analysis of the coupling time Back to the fully coupled CFD-ABM framework, an important decisional step was about the definition of the coupling time, namely at which time step the ABM simulations need to be paused to update the hemodynamics according to the geometrical changes. A short coupling period allows a better control of the model, but implies high computational time and efforts. Accordingly, a compromise between accuracy of the results and computational effort must be reached. To this aim, we developed an innovative technique based on a sensitivity analysis that was performed to assess the influence of the coupling time on the output, by testing three different cases on a 14 days follow-up period. In the first two cases, the ABM was coupled back to the CFD with a frequency of 7 and 3.5 days, respectively, while, in the third case, the first coupling was performed after 7 days, and then every 3.5 days. For each cross-section, the temporal evolution of the lumen area predicted in the three cases was evaluated, as well as the ABM simulation mode at each coupling interval, which can be either physiologic or pathologic, depending on the WSS profile computed at the corresponding coupling step. 
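The Latin hypercube sampling and PRCC computation described above can be sketched in Python as follows; the parameter ranges, the modes of the triangular distributions and the synthetic model output are illustrative assumptions, and the PRCC is obtained here by rank-transforming, regressing out the other parameter and correlating the residuals.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
k = 10  # number of parameter combinations (equal-probability strata)

# Triangular distributions for two of the parameters (ranges/modes illustrative).
params = {
    "alpha2": stats.triang(c=0.5, loc=1.0, scale=2.0),   # range [1, 3], mode 2
    "trylip": stats.triang(c=0.5, loc=1.0, scale=19.0),  # range [1, 20], mode 10.5
}

# Latin hypercube sampling: one draw per equal-probability stratum, shuffled per parameter.
lhs = {}
for name, dist in params.items():
    u = (np.arange(k) + rng.random(k)) / k        # stratified uniforms in (0, 1)
    lhs[name] = dist.ppf(rng.permutation(u))      # map through the inverse CDF

# Synthetic model output used only for demonstration (e.g. final lumen area).
output = 5.0 - 0.8 * lhs["alpha2"] + 0.01 * lhs["trylip"] + rng.normal(0, 0.05, k)

def prcc(x, y, controls):
    # Partial rank correlation: correlate the residuals of the rank-transformed
    # variables after removing the (rank) effect of the control parameters.
    rx, ry = stats.rankdata(x), stats.rankdata(y)
    C = np.column_stack([np.ones(len(x))] + [stats.rankdata(c) for c in controls])
    res_x = rx - C @ np.linalg.lstsq(C, rx, rcond=None)[0]
    res_y = ry - C @ np.linalg.lstsq(C, ry, rcond=None)[0]
    return stats.pearsonr(res_x, res_y)

print(prcc(lhs["alpha2"], output, [lhs["trylip"]]))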
ABM replication of homeostasis and atherosclerotic plaque generation. The ABM accurately and robustly replicated both the homeostatic condition of a healthy artery and the formation of an atherosclerotic plaque, when subjected to the corresponding hemodynamic stimuli. Figure 8 shows a qualitative comparison between histology and the ABM output, with the latter selected among the N independent runs for plane 1 (Fig. 8A) and plane 5 (Fig. 8B), for visualization purposes. When initialized with a physiologic WSS profile, the ABM output over a 2-month simulation did not show any substantial deviation from the initial configuration, as depicted in Fig. 8A. The slight alteration of the wall profile was within the physiological range, guaranteeing the preservation of the lumen and tissue layer areas. On the contrary, under atherogenic conditions, the ABM developed an atherosclerotic lesion with features resembling histological evidence, as shown in Fig. 8B. Both the ABM atherosclerotic output and the corresponding histology presented an asymmetric geometry, due to a focally localized thickening of the intima layer and the formation of a lipid-rich core (Fig. 8B). However, in the ABM output (Fig. 8B) the thick layer of fibrous intimal tissue covering the lipid core was not present, because lipids were still migrating into the intima at the stage of the ABM configuration shown in Fig. 8B. By implementation, such a layer would form once the process of lipid infiltration arrests and SMCs and ECM remain the only active agents, coherently with the fact that lipid core formation precedes the increase of fibrous tissue [46]. However, in the present work, the tissue layer separating the lipid core from the lumen was not fibrotic but normal intima, namely SMCs and ECM. Finally, in the ABM solution, maintenance of the baseline thickness and composition of the media and adventitia layers was in good agreement with the histological image. (The histology in Fig. 8A refers to a femoral artery of a 75-year-old male subject [47], while that in Fig. 8B to a coronary fibrous cap atheroma of a 24-year-old man [48].) For each plane, the analysis of the temporal evolution of the ABM simulations and outputs allowed a further evaluation of the model dynamics and robustness, both in physiologic and pathologic mode. Under physiologic conditions, stable trends of total cells, ECM and wall area were observed, and final healthy configurations were generated (Fig. S2). As shown in Figs. S2-3, despite the variability attributable to the inherent stochasticity, good agreement among the outputs was appreciable and indicative of a robust replication of homeostasis. Focusing on the atherogenic condition, further results are provided for the vessel cross-sectional plane 5 in Fig. 9; similar considerations apply to all the other cross-sections showing plaque formation. In Fig. 9, the results of the ten ABM simulations of arterial wall remodeling of plane 5 over a 2-month follow-up are illustrated in terms of temporal dynamics (Fig. 9A) and final ABM configurations (Fig. 9B). The normalized temporal trends of intimal, lumen, medial and adventitial area are provided, pointing out the monotonic decrease in lumen area due to the intimal thickening and the stability of the media and adventitia layers. A considerable variation among the simulations involved the most active dynamics, i.e. the luminal and intimal areas, while little to negligible deviation was observed in the adventitial and medial areas, respectively (Fig. 9A). The ABM output solutions at 2 months are provided in Fig.
9B for each of the a,...,j simulations. All the configurations replicated a pathologic wall remodeling with plaque generation, the latter shown in yellow. Although the intrinsic variability among the outputs due to the stochastic nature of the model, all the solutions agreed in terms of degree of stenosis and plaque size, location and morphology, as well as unaltered media and adventitia. Finally, the severity of the replicated pathology was proportional to the degree of atherogenic character of the WSS profile. In Fig. 10, the temporal evolution of an ABM cross-section out of ten is shown for planes 4, 5 and 9, providing an example of the ABM sensitivity to WSS. The percentage of lumen wall exposed to WSS < 1 Pa was 0.8, 16.5 and 52.6, respectively, while the lowest recorded WSS was 0.98 Pa, 0.69 Pa and 0.10 Pa. These WSS profiles triggered wall responses with different degree of intensity. Specifically, in plane 4, after two months the process of lipid infiltration was only at the beginning, with few lipid agents in the intimal layer, differently from plane 5 and 9, where lipids started migrating into the intimal layer within the first month, leading to the generation of a well discernible lipid core. In plane 9, the pathologic wall remodeling was faster than plane 5; moreover, although after one month the lipid core did not change significantly, the intima continued to grow, also thickening the layer between the lipid core and the lumen. At day 60, the configuration of plane 9 presented a more critical scenario compared to plane 5, in which the size of the lipid core was not stabilized yet and the lumen area was still largely preserved. Coherently with the definition and classification of advanced atherosclerotic plaque proposed by Stary et. al [46], the ABM configuration of plane 5 at day 60 (same as Fig. 8B) qualitatively resembles a type IV lesion, characterized by a dense accumulation of lipids without substantial lumen area change, and might progress to a condition of type V lesion, in which ECM is the major plaque component (as Fig. 10, plane 9 at day 60). The lumen stenosis at the end of the two months follow-up were 10%, 20% and 80% for plane 4, 5 and 9, respectively. Mono-parametric sensitivity analysis Among the investigated parameters, the performed analysis pointed out the presence of one most influent parameter, α2, whose primary effect on the SMC and ECM dynamics propagated to the lipid dynamics and largely affected the predicted lumen area reduction. Figure 11 provides details on the model sensitivity to α2 in terms of intimal SMC (Fig. 11A), ECM (Fig. 11B), and lipids (Fig. 11C) and lumen area (Fig. 11D) showing, for each studied output, the temporal trends along the simulation and seven box plots at one month of follow-up. In the graphs reporting the time evolution of the variables, for each considered value of the parameter, the colored bold line represents the median trend and the associated band is the interquartile range (IQR 25 th -75 th percentiles). On the right, the corresponding box plots describe the data distribution of the specific variable at the end of the simulation obtained with the specific value of α2. Acting on SMC and ECM dynamics, α2 indirectly affected also lipid dynamics by anticipating or delaying the process of lipid infiltration in the intima, whose starting moment is clearly evident in Fig. 11C and corresponds to the point on the x-axis at which a number of lipids greater than 0 is first observed. 
Specifically, the greater α2, the more SMC proliferation and ECM production were promoted, with an observed increase in such event rates and, obviously a higher intimal content of SMC and ECM at the end of the simulation (Figs. 11A-B). In turn, an augmented SMC proliferation and ECM production led to a faster thickening of the intima and, as consequence, to an earlier invasion of lipids in the wall (Fig. 11C). As expected, the starting point for lipid infiltration influenced the number of lipids in the intima observed at one month of follow-up. However, a saturation of the lipid content was observed with α2={2.436; 2.854}, due to a control on the lipid core size introduced in the lipid dynamics algorithm. Moreover, as natural consequence of the large effect that α2 has on the agent dynamics, the lumen area was considerably influenced by such parameter (Fig. 11D). An increase in α2 enhanced lumen area reduction rate, leading to the most critical scenario (i.e. smallest final lumen area) associated with the highest α2=2.854. Significant differences among the data distributions associated with different values of α2 were detected for all the studied outputs (p<0.05), and details on the multiple comparisons are provided in the supplementary material (Tab. S1). Finally, the results shown by the box plots pointed out a clearly monotonic relationship between α2 and the studied outputs. The graphs of the temporal dynamics and box plots of each considered output at the variation of the parameters {α3, α5, α6, α7, IT, trylip} are provided in the supplementary materials with the same modalities used for α2, see Figs. S4-9 and Tabs. S2-5. Within the studied range, parameters α3 and α7 did not show any significant influence neither in the agent dynamics, nor, as consequence, in the lumen area (Figs. S4 and S7). While the global model output, represented by the lumen area, was affected almost exclusively by α2, which emerged as the driving parameter, the subset of parameters {α5, α6, IT, trylip} was identified to significantly influence lipid dynamics with minor or no effects in the other model outputs. Figure 12 provides the lipid temporal dynamics along one month of simulation and box plots of the final intimal lipid content to the variation of the parameters α5 (Fig. 12A), α6 (Fig. 12B), IT (Fig. 12C) and trylip (Fig. 12D). As regards α5, while in the range between 0.039 and 0.105, no significant differences in the final lipid content were detected, α5 significantly affected that output when decreased below 0.039, with almost inhibition of the lipid intimal infiltration for α5=0.001. The effect of α5 on the lipid dynamics slightly propagated to the ECM dynamics, producing a reduced final ECM content in the intima for α5=0.016 and α5=0.001 (p<0.05) (Fig. S5 and Tab. S2). Similarly to α5, also α6 had an effect on the lipid dynamics only when decreased to 0.245, although in this case a slight reduction of the lipid infiltration rate was produced with a minor consequence to the final lipid content, compared to α5 ( Fig. S6 and Tab. S3). Furthermore, as expected, IT largely affected the final number of lipids, by controlling the starting point of the infiltration process. However, no propagation to the other dynamics was observed, except for a single significant difference recorded in the final content of ECM between IT=3.728 and IT=16.85, as shown in Fig. S8 and Tab. S4. 
Additionally, the number of chances for a lipid to invade the intima, trylip, acted on the infiltration rate by increasing the probability that at each time step a lipid successfully enters. As for IT, a significant difference was recorded only in the final SMC content, between trylip = 3 and trylip = 20 (Fig. S9 and Tab. S5). As previously mentioned for α2, a saturation of the lipid content may occur due to the control on the maximum number of lipids. Finally, none of these parameters had an influence on the lumen area, meaning that lumen area reduction is mostly due to augmented SMC proliferation and ECM production, rather than lipid accumulation in the intima. Figure 13 shows the PRCCs between the target model outputs and each of the input parameters {α2, α3, α5, α6, α7, IT, trylip}. Consistently with the mono-parametric sensitivity analysis, the final intimal content of SMC, ECM and lipids and the final lumen area were the investigated outputs. Although only PRCC values associated with p<0.05 were considered statistically significant, PRCC values with p≈0.06 were also taken into account as weakly significant. In accordance with the previous analysis, α2 was identified as the most influential parameter, with significant, highly positive correlations with the final amount of ECM and lipids in the intima and a significant, highly negative correlation with the lumen area (p<0.05). α2 also correlated highly with the final content of SMC, although with weak significance (p=0.068). Moreover, the remaining parameters were not found to correlate significantly with either the SMC and ECM intimal content or the final lumen area, although a slight influence was recorded in some cases by the mono-parametric analysis. Finally, high correlations with weak and high significance were detected between α6, IT, trylip and the final content of lipids, with α6 and trylip exhibiting positive correlations (weakly and highly significant, respectively), while IT exhibited a weakly significant negative one. CFD-ABM coupling period. As regards the temporal trend of the lumen area along the total 14 days of simulation, no differences resulted from the choice of a smaller or larger coupling period. Indeed, for each plane, the standard deviation bands of the curves obtained with the three case studies were connected or partially overlapping, meaning that the error committed by adopting the greatest coupling time (i.e. 7 days) was in the range of the noise of the stochastic ABM simulations. However, two different scenarios emerged in terms of the influence of the coupling period on the ABM simulation mode, which can be physiologic or pathologic depending on the WSS profile computed at each coupling step. Indeed, while for planes where the minimum WSS values were far from 1 Pa there was complete agreement in the ABM simulation mode along the 14 days (Fig. 14A), discordance was observed in planes where the lowest WSS values were close to 1 Pa (Fig. 14B). In the latter scenario, the shortest coupling period (i.e. 3.5 days) allowed catching switches between the physiologic and pathologic modes that were ignored by the other two cases. (Caption of Fig. 14: for each plane the output is analyzed in terms of normalized lumen area over time (curves) and simulation mode, i.e. physiologic/pathologic, along the 14 days of follow-up, for each case study: a) coupling every 7 days, b) coupling every 3.5 days, and c) first coupling after 7 days and then every 3.5 days.)
Discussion Several computational studies have already used ABMs to simulate vascular adaptation processes in response to the alteration of the baseline working conditions [16,17,26,27,34,[18][19][20][21][22][23][24][25]. In particular, in [16][17][18][19][20][21][22][23][24] ABMs of the arterial wall remodeling following stenting procedure were implemented to investigate the mechanisms of in-stent restenosis. However, in those works, the intervention procedure was simulated on a healthy artery, thus neglecting the underlying pathology which, instead, we consider to have a role in the final outcome. Herein we presented a novel framework that simulates the process of atherosclerotic plaque development in relation to the hemodynamics by coupling 2D ABM to CFD simulations in a 3D vessel geometry. Furthermore, we performed a detailed sensitivity analysis to identify the driving coefficients of the coupled model, thus laying the foundations for a future quantitative calibration and validation based on experimental and/or clinical data, which were not addressed in the present work. The strengths of the present framework are (i) the inclusion of important biological aspects related to the disease, i.e. cellular and extracellular dynamics, and (ii) the definition of a loop where molecular and tissue levels are strictly interconnected, and the effect of a perturbation applied to one node is directly reflected on the other ones. The framework is modular and versatile. After a proper experimental calibration and subsequent validation, it might allow the implementation of additional phenomena, such as drug therapies or intervention procedures, on a model of diseased artery. For instance, the effect of anti-proliferative or LDL-lowering drugs (e.g. statins) might be investigated, either by introducing advection-diffusion-reaction equations for those species, or by including them in the model as agents. In both cases, the current agent rules should be modified to take into account the mutual influence between their dynamics and the newly introduced factor/agent. In this context, it might be possible to study a potential plaque stabilization or regression [49]. Moreover, the model might serve to create a virtual population of patients with different patterns (e.g. size and location) of lipid core, degree of stenosis and, once implemented, fibrotic cap and calcifications. Indeed, although until now the model only includes SMCs, ECM and lipid agents, implementation of calcifications is currently under investigation. This, together with further improvements and an experimental validation, might allow investigating the effects of such patterns on the lesion progression and on the intervention outcomes. Finally, by using previously developed in house techniques [14,50], the framework might be also informed with monocyte related gene expression data following a specific treatment. All these aspects are thought to represent a turning point in terms of ability to predict the treatment outcome. To the best of our knowledge, the only work that partially resembles the proposed framework is that by Bhui et al. [27] in which a 3D ABM was coupled to CFD simulations to simulate the WSS-driven leukocyte trans-endothelial migration and subsequent plaque formation. However, differently from their model, in our ABM plaque volumetric growth was mainly due to SMCs proliferation, ECM production and lipid accumulation (Figs. 8B, 9B and 10), rather than leukocytes infiltration. 
This is in accordance with the study of Doran et al. [51], in which a key role of SMCs was recognized, and with the research of Stary et al. [46], who identified ECM as the major extracellular component of fibroatheromas after lipids. Finally, while in the model of Bhui et al. [27] wall remodeling was simulated according to Glagov's phenomenon [52], in the present work, for simplicity, only the stenotic effect of plaque growth was considered. Consequently, a monotonic trend of lumen area reduction was observed in our model (Fig. 9A), while, according to [52], a compensatory enlargement of the vessel wall should preserve the lumen area in the early stages of plaque formation. Although able to capture some major aspects of the pathologic wall remodeling associated with atherosclerosis, as shown in Figs. 8, 9 and 10, the present ABM does not replicate the formation of a fibrotic cap. However, instead of fibrous tissue, it simulates a thickening of the intima (i.e. increased SMCs and ECM) between the lipid core and the lumen in advanced stages of the plaque, as in Fig. 10. In view of a future implementation of structural (i.e. mechanical) aspects, it might be important to distinguish the fibrotic cap, since it is a key factor in determining plaque stability/instability [3]. Finally, the ABM successfully captured the wall response to different WSS stimuli (Fig. 10). Although endothelial cells were not included in the ABM, the contribution of the WSS to endothelial dysfunction, subsequently triggering the process of plaque formation, was considered through the implementation of a diffusion equation for the inflammation. Similarly, in previous works aimed at investigating the phenomenon of in-stent restenosis [18,19], the endothelium was not explicitly modeled, but a method to estimate the nitric oxide was used, thus considering its influence on SMC activity and the effect WSS exerts on it. The sensitivity analysis performed on the ABM provided insight into the working mechanisms of the model, by identifying the most influential parameters, the interactions among the agent dynamics and the contribution of SMC, ECM and lipid dynamics to the lumen area change. Both the mono-parametric and multi-parametric sensitivity analyses recognized α2 (i.e. the weight of the effect of inflammation in the cell/ECM dynamics) as the driving parameter. From a design perspective, this is the most important finding, suggesting that a future calibration of α2 will reduce most of the epistemic uncertainty associated with the model, thus improving the accuracy of the results. Moreover, in a general view, knowing in advance which is the most important parameter to be calibrated allows planning experiments optimized for the specific purpose, thus avoiding a waste of time and resources. The reasons why α3 did not show any contribution probably lie in the considered range and in the fact that the term weighted by α3 only involves a minor portion of the SMC/ECM agents, thus constituting a local factor that does not produce a net effect on the cell/ECM dynamics. As regards the lipid dynamics and its driving parameters, the performed sensitivity analysis pointed out some crucial aspects, previously unknown. First, the saturation of the lipid infiltration rate for α5 > 0.039 and α6 > 3.488, while keeping all the other parameters fixed, is due to the fact that, with trylip = 10 chances, the probability is already high enough to allow a lipid to enter at each time step.
As a consequence, if only one parameter at time is varied, trylip is the one that mostly controls the rate of lipid accumulation in the intima. This parameter may represent the global endothelium permeability to lipids. Indeed, while the probability of a single endothelial site to allow a lipid to enter the wall is computed as expressed by Eq. 8, the number of sites at each time step potentially favorable for lipid entry constitutes a global measure that largely control the phenomenon. However, when the combined effects of the parameters were investigated, also the contribution of α6 was identified, confirming what previously stated and recognizing the potentialities of multi-parametric sensitivity analysis. Due to the high computational cost of the ABM (mean computation time of 25.87 hours for a 2months simulation) the LHS/PRCC sensitivity analysis was performed on a small sample size and results were obtained by running 100 ABM simulations. Consequently, only few correlations were identified as statistically significant. However, we decided to take into account also high PRCC associated with p≈0.06 because we attribute the inability to get more significance for those high correlations to the small sample size. Indeed, such correlations corresponded to parameters that were found to significantly affect that specific output in the mono-parametric sensitivity analysis. On the contrary, high p-values were associated with low PRCC values and results of the mono-parametric analysis revealing no relationship between said input and output. A low efficacy of the PRCC is, indeed, associated with non-monotonic input-output relationships [45]. To obtain more reliable correlation estimates between each output and input, it is necessary to hugely increase the sample size, implying large computational effort. For this purpose, the Matlab code of the ABM might be converted in C, which is thought to extremely reduce the computation time for each ABM simulation. This will also allow extending the sensitivity analysis to more input parameters that were not considered in the present work, namely those related to the diffusion equation for the inflammation and the threshold on the WSS condition. In particular, the WSS threshold of 1 Pa is a strong assumption which is thought to largely affect the output of the model. In fact, in case the threshold was lowered to 0.5 Pa, for example, plane 5 in Fig. 10, whose minimum WSS is 0.69 Pa, would remain in physiologic condition as plane 1, and plane 9 would develop a less severe plaque. Moreover, in a future perspective, the combination of sensitivity analysis and inverse problem solution proposed by Casarin et al. [53,54] can be a valuable tool to further narrow the range of optimal setting of the ABM model coefficients. The sensitivity analysis on the coupling period revealed that a shorter coupling time should be preferred until the ABM simulation mode for each plane stabilizes, namely until the WSS profile become clearly pathologic/physiologic (i.e. far from 1 Pa) or switching behaviors have not been detected for enough time. However, in this work only three cases of coupling period were investigated. The automation of the fully coupled CFD-ABM framework will reduce the user time consumption in the coupling processes, namely for the generation of the CFD model, the CFD simulation settings and the initialization of the subsequent ABM simulations. 
Once automated, a more extensive sensitivity analysis on the coupling period will be possible, as well as a sensitivity analysis on the number M of 2D cross-sectional planes, which was not addressed in the present work, although considered important to determine the effect of the spatial resolution on the results. Finally, the automation of the framework and the conversion of the Matlab code in C, will facilitate the calibration of the ABM parameters on patient-specific geometries and the future validation, which otherwise would require excessive computational efforts and time. Conclusions In this methodological work, we developed a multiscale CFD-ABM framework, able to capture the mutual interaction between hemodynamics and arterial wall remodeling in atherosclerosis. The framework successfully simulated plaque formation in areas affected by disturbed hemodynamics of an idealized SFA model and updated the fluid dynamics following plaque growth. Qualitatively, the ABM replicated the main morphological and compositional changes involved in atherosclerosis, generating a pathologic arterial wall configuration coherent with histological images. Quantitatively, the output of the model was associated with uncertainty, which was related to its stochasticity and the input parameters. Replicating the ABM simulations N=10 times keeping all the parameters fixed allowed having an estimation of the aleatory uncertainty while the combination of mono-parametric and multi-parametric sensitivity analyses provided an estimation of the output oscillation due to uncertainty in the input parameters. The sensitivity analysis of the ABM parameters revealed that the lumen area reduction, which is the most clinically relevant effect of plaque formation, was exclusively governed by the weight of the WSS-induced inflammation, represented by α2, which acts on the SMC proliferation and ECM production in the intima layer. As a consequence, α2 was responsible for most of the uncertainty of the model output. This finding suggests that the identification of the exact value for α2 will be a turning point towards the definition of a simplified, but reliable model. Other parameters were found to influence the process of formation of the lipid core, without affecting neither SMC/ECM dynamics, nor the lumen area change. These parameters, having a local effect, are less important to be calibrated, but may express the inter-variability of the lipid core size among individuals. In conclusion, the results of the sensitivity analysis lay the foundations for a future parameter calibration and model validation based on experimental and/or clinical data, which is required for a more systematic assessment of the reliability and usability of the present multiscale CFD-ABM framework of atherosclerosis.
2020-01-23T09:07:35.955Z
2020-01-18T00:00:00.000
{ "year": 2020, "sha1": "06f8f456cfb91c2a7f317d5a7b892dea064753c3", "oa_license": "CCBY", "oa_url": "https://zenodo.org/record/3629400/files/CBM_Corti_post-print.pdf", "oa_status": "GREEN", "pdf_src": "MergedPDFExtraction", "pdf_hash": "ef3ce72fdeaa58e213656e21d001578d256674a4", "s2fieldsofstudy": [ "Medicine", "Engineering" ], "extfieldsofstudy": [ "Medicine", "Computer Science" ] }
237278167
pes2o/s2orc
v3-fos-license
Quantum Artificial Intelligence for the Science of Climate Change Climate change has become one of the biggest global problems increasingly compromising the Earth's habitability. Recent developments such as the extraordinary heat waves in California&Canada, and the devastating floods in Germany point to the role of climate change in the ever-increasing frequency of extreme weather. Numerical modelling of the weather and climate have seen tremendous improvements in the last five decades, yet stringent limitations remain to be overcome. Spatially and temporally localized forecasting is the need of the hour for effective adaptation measures towards minimizing the loss of life and property. Artificial Intelligence-based methods are demonstrating promising results in improving predictions, but are still limited by the availability of requisite hardware and software required to process the vast deluge of data at a scale of the planet Earth. Quantum computing is an emerging paradigm that has found potential applicability in several fields. In this opinion piece, we argue that new developments in Artificial Intelligence algorithms designed for quantum computers - also known as Quantum Artificial Intelligence (QAI) - may provide the key breakthroughs necessary to furthering the science of climate change. The resultant improvements in weather and climate forecasts are expected to cascade to numerous societal benefits. Introduction The Earth's mean temperature has risen steeply over the last few decades precipitating a broad spectrum of global-scale impacts such as glacier melt, sealevel rise and an increasing frequency of weather extremes. These changes have resulted from the rising atmospheric carbon pollution during the industrial era because of the use of fossil fuels. The Earth mean temperature today is about 1 degree Celsius (C) higher than pre-industrial times. Recent scientific advances increasingly suggest that exceeding 1.5 degrees C may cause the Earth system to lurch through a cascading set of 'tipping points' -states of no return -driving an irreversible shift to a hotter world. Climate change is global yet its manifestations and impacts will differ across the planet. Therefore, quantifying future changes at regional and local scales is critical for informed policy formulation. This, however, remains a significant challenge. We begin with a discussion on state-of-the-art science and technology on these questions and their current limitations. With this context, we come to the main theme of this article which is the potential of the emerging paradigm of 'quantum computing', and in particular Quantum Artificial Intelligence (QAI), in providing some of the breakthroughs necessary in climate science. The rest of the chapter is organized as follows. In Sect. 2, we discuss science of climate change and the role of artificial intelligence. Section 3 discusses quantum artificial intelligence for the science of climate change. Section 4 concludes the chapter and highlights possible future directions. Science of climate change and the role of artificial intelligence Climate models have become indispensable to studying changes in the Earth's climate, including its future response to anthropogenic forcing. Climate modelling involves solving sets of coupled partial differential equations over the globe. 
Physical components of the Earth system -the atmosphere, ocean, land, cryosphere and biosphere -and the interactions between them are represented in these models and executed on high performance supercomputers running at speeds of petaflops and beyond. Models operate by dividing the globe into grids of a specified size, defined by the model resolution. The dynamical equations are then solved to obtain output fields averaged over the size of the grid. Therefore, only physical processes operating at spatial scales larger than the grid size are explicitly resolved by the models based on partial differential equations; processes that operate at finer scales, such as clouds and deep convection, are represented by approximate empirical relationships called parameterizations. This presents at least two significant challenges: 1. While climate models have become increasingly comprehensive, grid sizes of even state-of-the-art models are no smaller than about 25 km, placing limits on their utility towards regional climate projections and thereby for targeted policymaking. 2. Physical processes organizing at sub-grid scales often critically shape regional climate. Therefore, errors in their parameterizations are known to be the source of significant uncertainties and biases in climate models. Additionally, numerous biophysical processes are not yet well understood due to the complex and nonlinear nature of the interactions between the oceans, atmosphere and land. Therefore, rapid advances are necessary to 'downscale' climate model projections to higher resolutions, improving parameterizations of sub-grid scale processes and quantifying as yet poorly understood non-linear feedbacks in the climate system. A significant bottleneck in improving model resolution is the rapid increase in the necessary computational infrastructure such as memory, processing power and storage. For perspective, an atmosphere-only weather model with deep convection explicitly resolved was recently run in an experimental mode with a 1km grid size. The simulation used 960 compute nodes on SUMMIT, one of the fastest supercomputers in the world with a peak performance of nearly 150 petaflops, yet achieved a throughput of only one simulated week per day in simulating a four month period. A full-scale climate model, including coupled ocean, land, biosphere and cryosphere modules, must cumulatively simulate 1000s of years to perform comprehensive climate change studies. Towards surmounting the challenge of this massive scaling up in computing power, there have recently been calls for a push towards "exascale computing" (computing at exaflop speeds) in climate research. While the technology may be within reach, practical problems abound in terms of how many centres will be able to afford the necessary hardware and the nearly GW scale power requirements of exascale computing that will require dedicated power plants to enable it. Similar bottlenecks exist for improving parameterizations of sub-grid scale processes. Satellite and ground based measurements have produced a deluge of observational data on key climate variables over the past few decades. However, these datasets are subject to several uncertainties such as data gaps, and errors arising during data acquisition, storage and transmission. The emerging challenge is to process and distil helpful information from this vast data deluge. 
Towards overcoming these challenges to improving climate projections, we discuss recent advances in Artificial Intelligence that have enabled new insights into climate system processes. These techniques, however, are also subject to their own limitations. It is in this context that we discuss how Quantum Artificial Intelligence may help overcome those limitations and advance both higher resolution climate model projections and reduce their biases. When machines learn decision-making or patterns from the data, they gain what is known as artificial intelligence (AI). Climate science has seen an explosion of datasets in the past three decades, particularly observational and simulation datasets. Artificial Intelligence (AI) has seen tremendous developments in the past decade and it is anticipated that its application to climate science will help improve the accuracy of future climate projections. Recent research has shown that the combination of computer vision and time-series models effectively models the dynamics of the Earth system ( [10]). It is anticipated that advances in this direction would enable artificial intelligence to simulate the physics of clouds and rainfall processes and reduce uncertainties in the present systems [27]. In addition to helping augment the representation of natural systems in climate models by using the now available high quality data, AI has also been proposed for climate change mitigation applications (6]) Other areas where AI is playing a leading role are the technologies of carbon capture, building information systems, improved transportation systems, and the efficient management of waste, to name a few [6]. There are however limitations to the present deep learning models, for example, their inability to differentiate between causation and correlation. Moreover, Moore's law is expected to end by about 2025 as it bumps up against fundamental physical limits such as quantum tunneling. With the increasing demands of deep learning and other software paradigms, alternate hardware advancements are becoming necessary [15]. Quantum Artificial Intelligence for the science of climate change Artificial intelligence algorithms suffer from two main problems: one is the availability of good quality data and the other is computational resources for processing big data at the scale of planet Earth. The impediments to the growth of AI based modelling can be understood from the way language models have developed in the past decade. In the early days of their success, developments were limited to computer vision, while natural language processing (NLP) lagged behind. Many researchers tried to use different algorithms for NLP problems but the only solution that broke ice was increasing the depth of the neural networks. Present day GPT, BERT and T5 models are evolved versions from that era. Maximizing gains from the rapid advances in artificial intelligence algorithms requires that they be complemented by hardware developments; quantum computing is an emerging field in this regard. Quantum computers (QC) represent a conceptually different paradigm of information processing based on the laws of quantum physics [28]. The fundamental unit of information for a conventional / classical computer is the bit, which can exist in one of two states, usually denoted as 0 and 1. 
The fundamental unit of information for a quantum computer, on the other hand, is the "qubit", a two-level quantum system that can exist as a superposition of the 0 and 1 states, interpreted as being simultaneously in both states although with different probabilities. What distinguishes quantum from classical information processing is that multiple qubits can be prepared in states sharing strong "non-classical" correlations called "entanglement" that simultaneously sample a much wider informational space than the same number of bits, thereby enabling, in principle, massively parallel computation. This makes quantum computers far more efficiently scalable than their classical counterparts for certain classes of problems. One trend in quantum computing is the race to demonstrate at least one problem that remains intractable to classical computers but can be practically solved by a quantum computer. Google coined this feat "quantum supremacy", and claimed, not without controversy, to have achieved it with its 54-qubit Sycamore chip [18]. A research team in China introduced Jiuzhang, a new light-based special-purpose quantum computer prototype, to demonstrate quantum advantage in 2020 [19]. The University of Science and Technology of China has also successfully designed a 66-qubit programmable superconducting quantum processor, named ZuChongzhi [20]. IBM plans to have a practical quantum chip containing in excess of one thousand qubits by 2023 [21]. Artificial intelligence on quantum computers is known as quantum artificial intelligence and holds the promise of providing major breakthroughs in furthering the achievements of deep learning. NASA runs the Quantum Artificial Intelligence Laboratory (QuAIL), which aims to explore the opportunities where quantum computing and algorithms can address machine learning problems arising in NASA's missions [24]. The JD AI research center announced a 15-year research plan for quantum machine learning. Baidu's open-source machine learning framework Paddle has a subproject called Paddle Quantum, which provides libraries for building quantum neural networks [25]. However, for practical purposes, the integration of AI and quantum computing is still in its infancy: quantum neural networks are developing at a fast pace in research labs, but pragmatically useful integration remains at an early stage [22,23]. The current challenges to industrial-scale QAI include how to prepare quantum datasets, how to design quantum machine learning algorithms, how to combine quantum and classical computations, and how to identify potential quantum advantage in learning tasks [26]. In the past 5 years, algorithms using quantum computing for neural networks have been developed ([3], [4]). Just as the open-source TensorFlow, PyTorch and other deep learning libraries stimulated the use of deep learning for various applications, we may anticipate that software such as TensorFlow Quantum (TFQ), QuantumFlow and others already in development will stimulate advances in QAI. Complex problems in Earth system science: Potential for QAI. Quantum artificial intelligence can be used to learn intelligent models of Earth system science, bringing new insights into the science of climate change. Quantum AI (QAI) can play an essential role in designing climate change strategies based on improved, high-resolution scientific knowledge. Recent studies (for example, [13]) have attempted to develop physics schemes based on deep learning.
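As a concrete illustration of superposition and entanglement, the short PennyLane snippet below prepares a two-qubit Bell state on a simulator; it is a generic textbook example, not code from the works cited above.

import pennylane as qml

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def bell_state():
    # A Hadamard puts qubit 0 into an equal superposition of |0> and |1>;
    # the CNOT then entangles qubit 1 with it, producing a Bell state.
    qml.Hadamard(wires=0)
    qml.CNOT(wires=[0, 1])
    return qml.probs(wires=[0, 1])

print(bell_state())  # approximately [0.5, 0, 0, 0.5]: only |00> and |11> are observed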
However, these are largely proof-of-principle studies in nascent stages. Challenges such as the spherical nature of the data over the Earth, complex and non-linear spatio-temporal dynamics, and others remain for AI-based improvement of climate models. Various techniques such as cubed spheres and tangent planes have been proposed to address the spatial errors arising out of sphericity. QAI can further develop advanced physical schemes using AI by incorporating high-resolution datasets, more extended training, and hyperparameter optimization. A necessary condition for quantum speedup of classical AI is that the task in question can be parallelized for training. Present libraries such as TensorFlow and PyTorch offer both data and model parallelism capabilities. They have also been released for quantum computers and need to be further developed for the industrial-scale quantum computers of the future. Case study on the use of QAI for climate science. We demonstrate an example of the application of Quantum Artificial Intelligence to land-use land-cover classification on the UC Merced dataset. The dataset is first transformed into quantum data using the PennyLane library and the training is then performed. The code can be found in the GitHub repository for this article at https://github.com/manmeet3591/qai_science_of_climate_change. (Figure 2: a case study on using quantum artificial intelligence for land-use land-cover classification.) Figure 2 shows the workflow of the case study: initially, satellite data are transformed into quantum data using PennyLane; then, a deep convolutional classification algorithm is applied for land-use land-cover classification. The output classes consist of forests, agricultural fields, etc. Conclusions and Future Work. Simulation studies are used to understand the science of climate change and are computationally expensive tools for understanding the role of various forcings on the climate system. For example, a recent study in the journal Science Advances showed how volcanic eruptions could force the coupling of the El Niño-Southern Oscillation and the South Asian monsoon systems. Works of this kind are critical in advancing the understanding of the climate system and its response to various forcings. However, they are computationally demanding and time-consuming to complete. Large-ensemble climate simulation is an area that requires further work using quantum computing and Quantum Artificial Intelligence. Quantum machine learning can play an important role, especially in pattern recognition for weather and climate science, with the problems that will benefit most from quantum speedup being those that are inherently parallelizable. However, various challenges present themselves in designing and operating useful quantum computers. State-of-the-art implementations of quantum computers today can control and manipulate on the order of 100 qubits, whereas it is estimated that real-world applications where quantum computers can reliably outperform classical computers would require on the order of a million qubits. This presents a formidable technological challenge. Additionally, entanglement, the heart of quantum computing, is a fragile resource prone to being destroyed by even the slightest disturbance (called "decoherence"). Therefore, operational quantum computers may be several years into the future.
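To give a flavour of how such a hybrid pipeline might look, the following PennyLane sketch encodes a few summary features of an image patch into a small quantum circuit whose outputs could feed a classical classifier head; the circuit layout, feature values and layer sizes are illustrative assumptions and do not reproduce the code in the repository referenced above.

import pennylane as qml
from pennylane import numpy as np

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def quantum_feature_map(features, weights):
    # Encode four per-patch statistics as rotation angles, apply a trainable
    # entangling block and read out Pauli-Z expectation values.
    qml.AngleEmbedding(features, wires=range(n_qubits))
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

# Trainable circuit weights: (layers, qubits, 3 rotation angles per qubit).
weights = np.random.uniform(0, np.pi, size=(2, n_qubits, 3))

# An illustrative "satellite patch" reduced to four summary features in [0, pi].
patch_features = np.array([0.3, 1.1, 2.0, 0.7])
quantum_features = quantum_feature_map(patch_features, weights)
print(quantum_features)  # these values would feed a classical classifier head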
Yet, given their potential to effect a genuine paradigm shift, much effort has been invested in exploring their application to various fields, and the focus of the present article concerns their possible climate science applications. In summary, quantum artificial intelligence is projected to be a powerful technology of the future. Developments in the field include both computer vision and sequence algorithms capable of being implemented on large quantum computers. All these advancements would be driven by one factor, i.e. the development of high-performance quantum computing hardware. Software Availability. We have released the code demonstrating the application of QAI to land-use land-cover classification on the UC Merced dataset at: https://github.com/manmeet3591/qai_science_of_climate_change
2021-08-25T01:15:45.892Z
2021-07-28T00:00:00.000
{ "year": 2021, "sha1": "32ceb6ff13e6535b07a65464f93df093cd63567e", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "32ceb6ff13e6535b07a65464f93df093cd63567e", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Computer Science", "Physics" ] }
169166558
pes2o/s2orc
v3-fos-license
The Analysis of Company Size, Complexity of Operation, Profitability, Solvency and Audit Firm Size toward Timeliness of Financial Statement Reporting for Company listed in LQ45 Index in Indonesia Stock Exchange (2012 – 2014). Companies are required to submit their annual reports timely after the end of the fiscal year to support stakeholders' need for information. Financial statements are beneficial if delivered accurately and timely to users for decision making. This research aims to identify the effect of company size, complexity of operation, profitability, solvency, and audit firm size on the timeliness of financial statement reporting in companies listed in the LQ45 index from 2012 to 2014, both simultaneously and partially. The research involves 69 samples, consisting of three years of data from 23 companies that were consistently listed in the LQ45 index from 2012 to 2014. The research found that complexity of operation, profitability, and audit firm size are statistically significant toward the timeliness of financial statement reporting, while company size and solvency are not statistically significant. The F-test result revealed that one or more independent variables have a significant influence on the timeliness of financial statement reporting. The R2 analysis showed that the regression model is able to explain 26.3% of the variation in the timeliness of financial statement reporting; the remaining 73.7% is explained by other factors outside this research. INTRODUCTION. The timeliness of financial reporting is needed by investors to analyze the capital that has been invested or will be invested in a company (Al Daoud et al., 2014). This means that the timeliness of financial statement reporting is a factor that attracts investors to invest in a company. Timely reporting also contributes to the prompt and efficient performance of stock markets and helps mitigate (or reduce the level of) insider trading, leaks, and rumors in the market (Owusu-Ansah, 2000). Companies that publish their financial statements faster than others will be noticed first by investors, because investors want to know a company's financial information from the reliable sources available to them. A delay in financial statement reporting will cause a negative reaction from investors, as it means investors no longer need the information because their decisions have already been made (Bonson & Borrero, 2011). The timeliness of financial reporting in Indonesia is regulated by Badan Pengawas Pasar Modal dan Lembaga Keuangan (Bapepam & LK): "Annual financial statements must be accompanied by an Accountant's report in connection with the audit of the financial statements. Annual financial statements must be submitted to Bapepam and LK and announced to the public no later than the end of the third month after the date of the annual financial statements" (Keputusan Ketua Bapepam & LK No.: Kep-346/BL/2011). This means that public companies have to submit and publish their financial statements no later than the end of the third month after the financial statement date, and the financial statements must be accompanied by an independent auditor's report/opinion. Although there is a sanction for companies that are late in reporting their financial statements (Kep-307/BEJ/07-2004 No. I-H), in fact some companies are still late in reporting.
In 2012 there are total of 52 companies, in 2013 there are total of 49 companies, and in 2014 there are total of 52 companies that late in reporting their financial statement. In order to help investors in generating investment decision in capital market, Indonesia capital market launched LQ45 Index. LQ45 Index is a market capitalization-weighted index that captures the performance of 45 most liquid companies listed on the Indonesia Stock Exchange (IDX). The index comprises of 45 stocks choice based on their liquidity, market capitalization and other criteria. The LQ45 Index covers at least 70% of the stock market capitalization and transaction values in the capital market. Hence, LQ45 index is able to reflect the performance of companies' listed shares in Indonesia Stock Exchange. In this research, author is interested in finding out the factors affecting the timeliness of financial statement reporting of the company listed in IDX especially the company in LQ45 index. Using the previous research as the reference in conducting the research, author will make a research to the factors such as company size, complexity, profitability, solvency, and audit firm that affecting financial statement reporting timeliness. This research will discuss about the timeliness of financial statement reporting in the company listed in LQ45 index of Indonesia Stock Exchange. The companies that will be tested only those that consistently listed in LQ45 index for 3 years straight, from 2012, 2013, and 2014. The factor that affecting the timeliness will focus on the company size, complexity of company, profitability, solvency, and size of audit firm. LITERATURE REVIEW One of the attributes that can be connected with the timeliness of financial statements 21 reporting is the company size. The size of the company can be assessed from several aspects. It can be based on the total value of assets, total sales, market capitalization, the amount of labor, and so on. The greater the value of these items, the greater the size of the company. Large companies shows that there is a lot of information that is contained in the company. Large companies often argue to be faster in submit their financial statements for several reasons. First, large companies have more resources, more accounting staff, sophisticated information systems, and have a good internal control system. Second, large companies receive more supervision from investors and regulators. In detail, large companies often followed by a large number of analysts who always expect timely information to strengthen and revise their expectations. Large companies are under pressure to announce its financial report on time to avoid any speculation in their companies stock trading (Owusu-Ansah, 2000). Other than that large companies will also be highlighted by the public than smaller companies. Therefore, large companies will attempt to submit their financial statements timely to maintain its image in the public (Dyer &Mc Hugh, 1975). The level of complexity of operation in a company depends on the number and location of its operating units (branches) as well as the diversification of product lines and markets. These things is more likely to affect the time required auditors to complete the audit work. So it also affects the timeliness of the company's financial statements reporting to the public. That relationship is also supported by research Ashton et.al. 
(1987) in Owusu-Ansah (2000) who found that there is a positive correlation between the complexities of company's operating to the audit delay. Furthermore, research conducted by Owusu-Ansah (2000) found empirical evidence that the level of complexity of the operation of a company has a relationship that will affect the company's timeliness in submitting financial statements to the public. Financial performance information, especially the profitability is required to assess potential changes in the economic resources that may be controlled in the future. Profitability is also used as an indicator to determine the successfulness of the company's performance to generate profit. The higher the profitability of a company then financial statements produced by the company contain good news. Companies that have good news in their report is likely to be more timely in publishing the financial statements. On the other hand, companies that have a low level of profitability then the financial statements will contain the bad news. Companies that have bad news in their report will likely not timely in publishing the financial statements. This conform with the previous research conducted by several researchers. Based on research conducted by Dyer and McHugh (1975), found that companies that earn profits tend to be timely in submitting their financial statements, and vice versa if loss. While Carslaw and Kaplan (1991) found that companies experiencing losses ask auditors to schedule the audit slower than it should, as the result becomes late in submitting their financial statements. Solvency is the ability of the company to settle all liabilities. According Carslaw and Kaplan (1991), the relative proportion of debt to total assets indicates a company's financial condition. Greater proportion of debt to total assets increases the likelihood of loss and may increase caution auditor on the financial statements to be audited. This is due to the high proportion of debt will increase the risk of losses. Therefore, companies that have unhealthy financial condition are usually prone to mismanagement and fraud. The high debt to equity ratio reflects the company's high financial risk. The company's high financial risks indicates that the company is experiencing financial distress due to high liabilities. Financial distress of company is bad news which would affect the company in the public view. The management will tend to delay the submission of financial statements that contained bad news because the time available will be used to suppress the debt to equity ratio as low as possible. A financial statement or information of company's performance should be presented accurately and reliable. Hence, the company then used the services of a public accounting firm to carry out audit work on the financial statements of the company. The size of the public accounting firm is differentiated into public accounting firms that enter the top four, in this case the Big Four and non Big Four public accounting firms, where the big four public accounting firm tend to more quickly complete the audit task they received. Big Four public accounting firms generally have greater resources so that it can conduct audits more quickly and efficiently. This proves the opinion that companies audited by a Big Four public accounting firm tend to more quickly complete the audit when compared with companies audited by a non Big Four public accounting firm. RESEARCH METHOD The research method used is a quantitative method. 
In this research, the dependent variable is the timeliness of financial statement reporting. An independent variable is a variable that explains or affects another variable. Five independent variables are used: company size, complexity of operation, profitability (net profit margin), solvency (debt to total assets), and audit firm size. This research examines the effect of the independent variables on the dependent variable using a multiple regression model. An operational definition is an indicator of how the variables are measured. To simplify the analysis, each variable is defined operationally as follows. Timeliness Timeliness is the span between the company's fiscal year closing date (December 31) and the announcement of the audited annual financial statements to the public, i.e., the number of days required to announce the audited annual financial statements to the public, measured from the fiscal year closing date until the date of submission to Otoritas Jasa Keuangan (OJK) (no later than March 31 of the following year). This dependent variable is measured based on the date of submission of the audited annual financial statements. Company Size The size of a company can be expressed by the total value of assets, total sales, market capitalization, and so on. The greater the value of these items, the greater the size of the company. In this research, company size is proxied by the natural logarithm (Ln) of market capitalization. Market capitalization is the aggregate valuation of the company based on its current share price and the total number of outstanding shares. The natural logarithm is used in this research to reduce excessive fluctuations in the data: if the raw market capitalization values were used, the variable would take huge values, possibly billions or even trillions; taking natural logs simplifies these values without changing the proportions of the original values. Complexity of Operation The degree of complexity of a company's operations is expected to influence how timely the company reports to the public. The degree of complexity depends on the number and locations of the company's operating units (branches) and the diversification of its product lines and markets. This complexity is likely to affect the time the auditor takes to complete the audit assignment, and hence the time by which the company eventually releases its financial report to the public. Thus, a positive relationship between operational complexity and audit delay is expected. The complexity of a company's operations is captured by the number of subsidiaries of the sample company. Profitability Profitability is an indicator of the success of the company (management effectiveness) in generating profits. The higher the company's ability to generate profits, the higher the effectiveness of the company's management. In this research, profitability is measured using the net profit margin, that is, net profit divided by revenue. Solvency Solvency is the company's ability to pay off its debts, both long-term debt and short-term debt. In this research, solvency is measured using the ratio of total debt to total assets, the so-called debt to total assets ratio. Audit Firm Size In this research, the size of the audit firm is measured by identifying which audit firm audits the company's financial statements. There are two categories of audit firms, Big Four and non Big Four. The Big Four are the four largest international public accounting firms.
The audit firms categorized as Big Four are PwC, Deloitte, EY, and KPMG. Audit firm size in this research is measured using a dummy variable: companies that use the services of a firm affiliated with a Big Four audit firm are coded 1, and companies that use the services of a firm not affiliated with a Big Four audit firm are coded 0. Sampling Design The sample represents the population, shares its characteristics, and can therefore be considered representative. In this research, the sample is taken by the purposive sampling method; that is, the sample is selected by applying criteria. The sample is taken from companies in the LQ45 index of the Indonesia Stock Exchange (IDX) for the period 2012 - 2014. The criteria applied in this research are: 1. Companies that were still actively listed in the IDX, specifically in the LQ45 Index, for the period 2012 - 2014. DATA ANALYSIS METHOD The data analysis method applied in this research is multiple regression analysis. A multiple regression model is used to analyze the effect of the independent variables on the dependent variable. The model is: Time i,t = β0 + β1 Size i,t + β2 Comp i,t + β3 Prof i,t + β4 Solv i,t + β5 Audit i,t + e. Where: Time i,t = timeliness of financial statement reporting of company i in year t; Size i,t = company size of company i in year t; Comp i,t = complexity of operation of company i in year t; Prof i,t = profitability of company i in year t; Solv i,t = solvency of company i in year t; Audit i,t = audit firm size of company i in year t; β0 = constant; β1-β5 = regression coefficients; e = error. R-test The correlation coefficient test is used to determine the relationship between the independent and dependent variables, whether perfect, strong, moderate, weak, or absent (Ghozali, 2013). In interpreting the correlation, a value of zero (or close to 0) means there is no relationship, or that the relationship between the variables is weak, and the relationship is said to be strong if R is close to 1 (Ghozali, 2013). Coefficient of Determination (R² test) The coefficient of determination (R²) is used to measure the variance of the dependent variable about its mean that is explained by the independent, or predictor, variables. The coefficient of determination is the square of the correlation (r) between predicted y scores and actual y scores. Adjusted R² always takes on a value between 0 and 1. With linear regression, the coefficient of determination is also equal to the square of the correlation between x and y scores. The closer adjusted R² is to 1, the better the estimated regression equation fits or explains the relationship between X and Y. An R² of 0 means that the dependent variable cannot be predicted from the independent variables. An R² of 1 means the dependent variable can be predicted without error from the independent variables. An R² between 0 and 1 indicates the extent to which the dependent variable is predictable. F-Test The F-test is used to determine whether the independent variables affect the dependent variable simultaneously. The significance level used is 5%. If the significance value is greater than 0.05, the independent variables jointly do not significantly affect the dependent variable. On the other hand, if the significance value is less than 0.05, it can be concluded that at least one of the independent variables has a statistically significant effect on the dependent variable.
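The regression model and the F-test described above can be illustrated with a short script. The sketch below is only an illustration: the original analysis was run in SPSS, and the DataFrame column names (TIME, SIZE, COMP, PROF, SOLV, AUDIT) and the file name are hypothetical placeholders.

```python
# Illustrative sketch only: the paper's analysis was performed in SPSS.
# Column names (TIME, SIZE, COMP, PROF, SOLV, AUDIT) and the CSV file name
# are hypothetical placeholders, not the authors' actual data layout.
import pandas as pd
import statsmodels.formula.api as smf

def fit_timeliness_model(df: pd.DataFrame):
    # Time = b0 + b1*Size + b2*Comp + b3*Prof + b4*Solv + b5*Audit + e
    model = smf.ols("TIME ~ SIZE + COMP + PROF + SOLV + AUDIT", data=df).fit()
    # Joint (simultaneous) significance of all regressors: the F-test
    print(f"F-statistic: {model.fvalue:.3f}, p-value: {model.f_pvalue:.4f}")
    # Share of variation in timeliness explained by the model
    print(f"R-squared: {model.rsquared:.3f}")
    # Per-variable coefficients and t-tests (partial significance)
    print(model.summary().tables[1])
    return model

# Hypothetical usage with a panel of firm-year observations:
# df = pd.read_csv("lq45_2012_2014.csv")
# fit_timeliness_model(df)
```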
t-Test The t-test is used to determine whether each independent variable partially has a statistically significant effect on the dependent variable. The t-test is performed by examining the t-value and the significance level of each variable. The significance level used is 5%. A variable does not have a statistically significant impact if its significance value is greater than 0.05; otherwise, if the significance value is less than 0.05, the variable affects the dependent variable in a statistically significant way. The statistical notations of the hypotheses tested through the t-test follow. Descriptive Statistic Analysis Descriptive statistics relate to the collection and ranking of data that describe the characteristics of the sample used in this research. This analysis describes the characteristics of the sample using extreme values (minimum and maximum values), the mean (average), and the standard deviation. Based on the data processed using SPSS, which include timeliness, company size, complexity of operation, profitability, and solvency, the minimum value, maximum value, mean, and standard deviation of each variable can be determined. The audit firm size variable is not included in the calculation of descriptive statistics because it is a variable with a nominal scale. A nominal scale is a scale of measurement for categories or groups (Ghozali, 2013); the figures serve only as category labels without any intrinsic value, so it is not appropriate to calculate the mean and standard deviation of this variable (Ghozali, 2013). In Table 1, the minimum value of the timeliness variable is 23 and the maximum value is 96. This means that the shortest time for a company to submit its audited financial statements is 23 days after the year end, while the longest time is 96 days after the year end, i.e., the financial statements were not reported on time. The mean of timeliness is 69.2174 with a standard deviation of 17.58414. The descriptive statistics also show that, on average, the companies reported their financial statements in a timely manner, at about 69 days. Based on the descriptive test in Table 1, the minimum value of company size is 29.77 and the maximum value is 33.41. The result shows that the natural logarithm (ln) of market capitalization in this research ranges from 29.77 to 33.41 with an average of 31.8183. According to Table 1, the minimum value of the complexity variable is 1 and the maximum value is 44, meaning that the companies in this research have at least 1 subsidiary and at most 44 subsidiaries; the average number of subsidiaries is 10.6232. According to Table 1, the minimum value of profitability in this research is 0.03 and the maximum value is 0.86. This means that the least profitable sample converts 3% of its revenue into profit, while the most profitable sample converts 86% of its revenue into profit. The net profit margin in this research ranges from 0.03 to 0.86 with an average of 0.5092. According to Table 1, the minimum value of solvency in this research is 0.14 and the maximum value is 0.88. This means that the least leveraged sample finances 14% of its assets with debt, while the most leveraged sample finances 88% of its assets with debt. The debt to total assets ratio in this research ranges from 0.14 to 0.88 with an average of 0.5092.
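A minimal sketch of how these descriptive statistics could be reproduced is given below; the column names are the same hypothetical placeholders used earlier, and the nominal AUDIT dummy is deliberately excluded from the mean and standard deviation.

```python
# Hypothetical sketch: reproduce min, max, mean and standard deviation for the
# continuous variables only; the nominal AUDIT dummy is summarised separately
# by a frequency count (its mean and SD are not meaningful).
import pandas as pd

def describe_sample(df: pd.DataFrame) -> pd.DataFrame:
    continuous = ["TIME", "SIZE", "COMP", "PROF", "SOLV"]
    stats = df[continuous].agg(["min", "max", "mean", "std"]).T
    audit_freq = df["AUDIT"].value_counts()  # 1 = Big Four affiliate, 0 = other
    print(audit_freq)
    return stats.round(4)
```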
For a general overview of the sample with respect to the audit firm size variable, see the following frequency table: Table 2 shows the frequency of samples using the services of Big Four or non Big Four audit firms for the period 2012 to 2014. Based on the frequency table, there are 9 samples (13%) using the services of non Big Four audit firms and 60 samples (87%) using the services of Big Four audit firms. Result of Classical Assumption Test This research is intended to analyze the influence of company size, complexity of operation, profitability, solvency, and audit firm size on the timeliness of financial statement reporting in the LQ45 index of the Indonesia Stock Exchange for the period 2012-2014. Before doing the regression analysis, the researcher performed classical assumption tests. Classical assumption testing is the main requirement of a regression equation, so the data must be tested against four classical assumptions. In this research, the researcher used the Kolmogorov-Smirnov test to obtain a more accurate and objective normality test. The researcher used this test instead of a histogram graph because judging normality only from a histogram can be misleading, particularly for small samples (Ghozali, 2013). In Table 3, the asymptotic significance is 0.383. Since the asymptotic significance (2-tailed) ≥ alpha (0.05), the data are normally distributed. The result of the normality test is also complemented by the P-P plot presented in Figure 1 (Probability Plot of Normality Test). Based on Figure 1, the plot shows that the residuals are distributed normally, since the dot pattern follows the diagonal line, which represents the normal distribution. Thus, the residual values are distributed normally and independently. Multicollinearity refers to a condition of collinearity between independent variables. The multicollinearity test checks whether there is any collinearity or resemblance between independent variables, usually involving more than two independent variables. Multicollinearity can be detected by looking at the tolerance value and the variance inflation factor (VIF). When the tolerance value is more than 0.10 (>0.10) or the VIF is less than 10 (<10), it can be concluded that there is no multicollinearity between the independent variables in the regression. The result of the multicollinearity test shows that all independent variables have tolerance values of more than 0.10 (>0.10) and VIF of less than 10 (<10); it can be concluded that there is no multicollinearity in the linear regression model of this research. In this research, the presence or absence of heteroscedasticity is first detected by looking at the chart patterns generated from data processing using SPSS. The scatterplot graph is used to analyze whether there is heteroscedasticity or homoscedasticity by observing the spread of the dots. The scatterplot shows that the dots are randomly spread both above and below zero on the Y axis, so it can be concluded that there is no heteroscedasticity in the regression model. Dots that lie far away from the others reflect observations that differ markedly from the other observations. To be more accurate, the Glejser test can be performed; this method regresses the absolute residual value on the independent variables. Two parameters determine whether heteroscedasticity exists under the Glejser test.
The result of the Glejser test is as follows: Table 5 shows that none of the independent variables has a statistically significant influence on the absolute residual value (ABS_RES) as the dependent variable. The Sig values of all independent variables are more than 0.05 (sig > 0.05), namely 0.746, 0.065, 0.364, 0.408, and 0.196. Therefore, it can be concluded that the regression model does not contain any heteroscedasticity. The researcher used the Breusch-Godfrey test to detect autocorrelation. The autocorrelation test aims to test whether, in the linear regression model, there is a correlation between the disturbance error in period t and the disturbance error in period t-1 (Ghozali, 2013). If there is a correlation, there may be an autocorrelation problem. Autocorrelation arises because sequential observations over time are related to each other. A good regression model is free from autocorrelation. The RES_2 value shows whether autocorrelation occurs in a regression model, with the criteria that if Sig RES_2 ≥ alpha (0.05) there is no autocorrelation, and if Sig RES_2 < alpha (0.05) there is autocorrelation. The result of the Breusch-Godfrey test is as follows: in Table 6, the significance value of RES_2 is 0.113, which is bigger than alpha (α = 0.05). This significance value shows that there is no autocorrelation, so the data in this research are suitable for use. Result of R-Test Below is the result of the correlation coefficient analysis (R-test): in Table 7, the value of R is 0.513, which is greater than 0.5. The result shows that the relationship between the independent variables and the dependent variable is strong, because it lies between 0.5 and 1. Coefficient of Determination (R² Test) The coefficient of determination (denoted by R²) is a key output of regression analysis. It is interpreted as the proportion of the variance in the dependent variable that is predictable from the independent variables. The coefficient of determination is the square of the correlation (r) between predicted y scores and actual y scores. In Table 8, the value of R Square is 0.263, which means that company size, complexity of operation, profitability, solvency and audit firm size are able to explain the timeliness of financial statement reporting by 26.3%. The remaining 73.7% is determined by variables other than those used in this research. Result of F-Test To see whether the regression model is a good model, and whether company size, complexity of operation, profitability, solvency, and audit firm size together influence the timeliness of financial statement reporting, the F-test is conducted. The F-test shows the probability value, or significance, in the ANOVA that represents the appropriateness of the regression model. The probability value is considered good if it is less than 0.05. The result of the F-test is as follows: in Table 9, the F-statistic value is 4.507 with a significance (p value) of 0.001. The significance value is lower than the significance level of 0.05 (0.001 < 0.05). At the 5% significance level, the F-table value is 2.360; in comparison, the F-statistic value of 4.507 is higher than the F-table value of 2.360 (4.507 > 2.360). From these results, it can be concluded that the independent variables simultaneously contribute a significant effect on the dependent variable, or at least one of the independent variables influences the dependent variable. Result of t-Test The hypothesis testing is performed through the t-test.
The t-test is performed in order to analyze the extent of each independent variable's influence on the dependent variable, which is the timeliness of financial statement reporting. In this research, the t-table value (two-tailed) used is 1.998, which results from 63 degrees of freedom (69 - 5 - 1) and α = 5%. The result of the t-test is shown in Table 10 below. The constant for timeliness is 51.219 when SIZE, COMP, PROF, SOLV, and AUDIT are all equal to zero (0). The SIZE coefficient of 1.670 means that for every one-point increase in company size, the length of time needed to report the financial statements increases by 1.670 days. The COMP coefficient of -0.438 means that for every one-point increase in complexity of operation, the length of time needed to report the financial statements decreases by 0.438 days. The PROF coefficient of -38.022 means that for every one-point increase in profitability, the length of time needed to report the financial statements decreases by 38.022 days. The SOLV coefficient of -19.455 means that for every one-point increase in solvency, the length of time needed to report the financial statements decreases by 19.455 days. The AUDIT coefficient of -13.759 means that companies audited by a Big Four-affiliated firm (AUDIT = 1) need, on average, 13.759 fewer days to report their financial statements. Three variables have a significant influence on the timeliness of financial statement reporting: complexity of operation (COMP), profitability (PROF), and audit firm size (AUDIT). The results of the hypothesis testing are presented below. The SIZE t-statistic is lower than the t-table value (0.660 < 1.998) and the significance level is greater than 0.05 (0.511 > 0.05). This means that there is no significant influence of SIZE on the timeliness of financial statement reporting for the sample of LQ45 Index companies in the period 2012 - 2014. In this research, company size is measured using market capitalization, the aggregate valuation of the company based on its current share price and the total number of outstanding shares. Share prices of companies from different industries vary because the companies have different market shares. The sample in this research comes from heterogeneous industries, which creates discrepancies when comparing company size using market capitalization, because market share is disproportionate across companies from different industries. Thus, market capitalization may not be an appropriate measurement of company size in this research, which may explain the non-significant result. The COMP t-statistic lies beyond the negative t-table value (-2.023 < -1.998) and the significance level is lower than 0.05 (0.047 < 0.05). This means that there is a significant influence of COMP on the timeliness of financial statement reporting for the sample of LQ45 Index companies in the period 2012 - 2014. This result supports the research conducted by Owusu-Ansah (2000), which states that the complexity of a company's operations affects the timeliness of financial reporting. The level of complexity of the company's operations is based on the number of subsidiaries that the company has; this tends to influence the time the auditor needs to complete the audit task, and therefore affects the timeliness of financial reporting by companies. The PROF t-statistic lies beyond the negative t-table value (-2.688 < -1.998) and the significance level is lower than 0.05 (0.009 < 0.05).
This means that there is a significant influence of PROF on the timeliness of financial statement reporting for the sample of LQ45 Index companies in the period 2012 - 2014. This result supports the research conducted by Dyer and McHugh (1975), who found that companies that earn profits tend to be timely in submitting their financial statements, and vice versa for companies that make losses. The higher the profitability of a company, the more the financial statements produced by the company contain good news, and companies with good news in their reports are likely to be more timely in publishing their financial statements. On the other hand, the financial statements of companies with a low level of profitability contain bad news, and companies with bad news in their reports are likely not to be timely in publishing their financial statements. The SOLV t-statistic does not reach the negative t-table value (-1.858 > -1.998) and the significance level is greater than 0.05 (0.068 > 0.05). This means that there is no significant effect of SOLV on the timeliness of financial statement reporting for the sample of LQ45 Index companies in the period 2012 - 2014. In this research, it is found that half of the sample has a solvency value above the average of all samples tested. Even though these companies have a high debt to total assets ratio, they still reported their financial statements on time. This indicates that solvency does not affect the timeliness of financial statement reporting for companies listed in the LQ45 Index. Because companies listed in the LQ45 index receive close attention from investors, they try to fulfill investors' need for timely reported financial statements. This finding supports the results of Owusu-Ansah (2000) and Amari and Jarboui (2013), who showed an insignificant influence of solvency on the timeliness of financial statement reporting. The AUDIT t-statistic lies beyond the negative t-table value (-2.039 < -1.998) and the significance level is lower than 0.05 (0.046 < 0.05). This means that there is a significant influence of AUDIT on the timeliness of financial statement reporting for the sample of LQ45 Index companies in the period 2012 - 2014. This result supports the research conducted by Owusu-Ansah and Leventis (2006), who found that companies using the services of a big audit firm tend to release their financial statements faster. The size of the public accounting firm is differentiated into firms that belong to the top four, in this case the Big Four, and non Big Four public accounting firms, where Big Four public accounting firms tend to complete the audit tasks they receive more quickly. Big Four public accounting firms generally have greater resources, so they can conduct audits more quickly and efficiently. Out of company size, complexity of operation, profitability, solvency, and audit firm size used as independent variables in this research, only complexity of operation, profitability, and audit firm size show significant results with respect to the timeliness of financial statement reporting. The independent variables explain 26.3% of the dependent variable. Meanwhile, company size and solvency are not good predictors of the timeliness of financial statement reporting. CONCLUSION Based on the analysis and the results of the tests in the previous chapters, the conclusions on the influence of company size, complexity of operation, profitability, solvency, and audit firm size on the timeliness of financial statement reporting for the companies listed in the LQ45 index of the Indonesia Stock Exchange in the period 2012 - 2014 are summarized as follows: 1.
The hypothesis testing results indicate that company size does not have a significant influence on the timeliness of financial statement reporting. This is shown by a significance value of more than 0.05, namely 0.511. 2. With a significance value of less than 0.05, namely 0.047, the hypothesis testing results indicate that complexity of operation has a significant influence on the timeliness of financial statement reporting. 3. The hypothesis testing results indicate that profitability has a significant influence on the timeliness of financial statement reporting, shown by a significance value of less than 0.05, namely 0.009. 4. A significance value of more than 0.05, namely 0.068, means the hypothesis testing results indicate that solvency does not have a significant influence on the timeliness of financial statement reporting. 5. Shown by a significance value of less than 0.05, namely 0.046, the hypothesis testing results indicate that audit firm size has a significant influence on the timeliness of financial statement reporting. 6. Company size, complexity of operation, profitability, solvency, and audit firm size simultaneously contribute a significant influence on the timeliness of financial statement reporting. The independent variables are able to explain the timeliness of financial statement reporting up to 26.3%; the remaining 73.7% is determined by variables other than those used in this research. However, this research has limitations. The results indicate only a modest influence of the independent variables on the dependent variable, amounting to 26.3%, with the remaining 73.7% influenced by other factors not included in the model; thus, there are many variables that affect the dependent variable but are not included in this model. This research is limited to the companies included in the LQ45 Index listed in the Indonesia Stock Exchange in the period 2012 - 2014, which consist of heterogeneous industries. This makes market capitalization an imperfect measure of company size, because market share is disproportionate across companies from different industries. In the future, researchers should add other fundamental factors as independent variables, because it is very possible that some fundamental factors not included in this study have a strong influence on the timeliness of financial statement reporting, such as Return on Assets (ROA), Debt to Equity Ratio (DER), Total Assets, and many more. Future researchers should also extend the research period; the results obtained would then hopefully give a better explanation and a better picture of the real condition of the research subject, and provide better analysis for other interested users of the research. Ten years of data should prove adequate. In the future, researchers should also observe companies from a specific type of industry, such as manufacturing, mining, the service industry, and others, to get a more accurate picture of the results for that specific industry.
2019-05-30T23:44:52.186Z
2018-05-17T00:00:00.000
{ "year": 2018, "sha1": "2e8b861bda24a9ec39f10fcf8ef3839b8fcaa1c4", "oa_license": null, "oa_url": "http://e-journal.president.ac.id/presunivojs/index.php/JAAF/article/download/328/240", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "48f39a27494a21290831e76a0bbe777db767a514", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Business" ] }
14317495
pes2o/s2orc
v3-fos-license
The Associations of Single Nucleotide Polymorphisms in miR-146a, miR-196a and miR-499 with Breast Cancer Susceptibility Background Previous studies have investigated the association between single nucleotide polymorphisms (SNPs) located in microRNAs (miRNAs) and breast cancer susceptibility; however, because of their limited statistical power, many discrepancies are revealed in these studies. The meta-analysis presented here aimed to identify and characterize the roles of miRNA SNPs in breast cancer risk, and evaluate the associations of polymorphisms in miR-146a rs2910164, miR-196a rs11614913 and miR-499 rs3746444 with breast cancer susceptibility, respectively. Methodology/Principal Findings The PubMed and Embases databases were searched updated to 31st December, 2012. The complete data of polymorphisms in miR-146a rs2910164, miR-196a rs11614913 and miR-499 rs3746444 from case-control studies for breast cancer were analyzed by odds ratios (ORs) with 95% confidence intervals (CIs) to reveal the associations of SNPs in miRNAs with breast cancer susceptibility. Totally, six studies for rs2910164 in miR-146a, involving 4225 cases and 4469 controls; eight studies for rs11614913 in miR-196a, involving 4110 cases and 5100 controls; and three studies of rs3746444 in miR-499, involving 2588 cases and 3260 controls, were investigated in the meta-analysis. The rs11614913 (TT+CT) genotype of miR-196a2 was revealed to be associated with a decreased breast cancer susceptibility compared with the CC genotypes (OR = 0.906, 95% CI: 0.825–0.995, P = 0.039); however, no significant associations were observed between rs2910164 in miR-146a (or rs3746444 in miR-499) and breast cancer susceptibility. Conclusions This meta-analysis demonstrates the compelling evidence that the rs11614913 CC genotype in miR-196a2 increases breast cancer risk, which provides useful information for the early diagnosis and prevention of breast cancer. Introduction MicroRNAs (miRNAs) are non-coding RNA molecules that can act as tumor suppressor genes or oncogenes [1]. There are more than 1000 miRNA genes in the human genome [2][3][4], which regulate the translation or degradation of human messenger RNA (mRNA) by sequence complementarity [5][6][7]. MiRNAs regulate approximately 30% of human genes [8]. The genetic variants of a miRNA may affect its biogenesis and maturation [9,10], which are causally linked to the pathogenesis of numerous diseases, including cancer [11,12]. Several miRNA polymorphisms have been reported to affect miRNA processing or miRNA-mRNA interactions [12,13]. Single nucleotide polymorphisms (SNPs) in miRNAs can be used as genetic markers to predict breast cancer susceptibility or prognosis. For example, a significant association was identified between polymorphism rs11614913 in miR-196a2 and breast cancer risk [14]. Breast cancer patients with the variant C allele in miR-146a produced higher levels of mature miR-146, which may predispose women to an earlier age of onset of familial breast cancer [15,16]. The variant genotypes rs3746444 in miR-499 were also reported to be associated with significantly increased risks of breast cancer [17]. The rs6505162 with the CC genotype in miR-423 could reduce the risk of breast cancer development [18]. Nevertheless, some SNPs in miRNAs showed no association with breast cancer risk [19,20]. Catucci et al. reported that the SNPs rs11614913 in miR-196a2, rs3746444 in mir-499 and rs2910164 in miR-146a were not related to breast cancer risk [19]. 
Jedlinski's study also did not support an association of polymorphism rs11614913 in miR-196a2 with breast cancer susceptibility [20]. Thus, there are many discrepancies concerning the relationship between SNPs in miRNAs (miR-146a, miR-196a2, and miR-499) and breast cancer susceptibility, which may be attributed to sample sizes, different ethnic groups and the different miRNAs studied. Meta-analysis is a statistical method for contrasting and combining results from different studies, in the hope of identifying sources of disagreement among those results [21]. A meta-analysis allows derivation and statistical testing of overall factors and effect-size parameters, which can identify whether a publication bias exists or whether the results are more varied than what is expected from the sample diversity. Though several meta-analysis studies evaluating the roles of miRNA gene polymorphisms in cancer have been published, few meta-analysis studies have assessed the associations of these three SNPs of miR-146a, miR-196a and miR-499 with breast cancer susceptibility. Therefore, we selected these three SNPs in this meta-analysis according to two basic principles, as established in a previous study [22]: first, the minor allele frequency of the SNP was not less than 5%; second, only functional SNPs were selected. This meta-analysis aimed to resolve the discrepancies among the results of the associations of these miRNAs (miR-146a, miR-196a2, and miR-499) with breast cancer susceptibility. Eligible studies and data extraction The PubMed and Embase databases were searched with the following terms: "breast cancer/carcinoma", "polymorphism/variant", "miR-146a/rs2910164", "miR-196a2/rs11614913" or "miR-499/rs3746444". The retrieved articles, published in the English language up to 31st December, 2012, were limited to human species, female sex and cancer subjects of adult patients (19+ years). All titles and abstracts of the retrieved articles were reviewed to exclude clearly irrelevant studies. The full texts of the remaining articles were read, and a manual search of the references from the original studies was performed to identify additional articles on the same topic. All case-control studies were assessed according to the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement, as in a previous report [23]. The specific inclusion criteria were as follows: (1) case-control studies: cases are patients newly diagnosed with breast cancer, and the controls were subjects without breast cancer; (2) odds ratios (ORs) with their 95% confidence intervals (95% CIs) can be calculated from correct and sufficient polymorphism distribution data; (3) correct statistical analysis. The strict exclusion criteria were: (1) pure cell studies and non-breast cancer studies; (2) articles that are not case-control studies; (3) repeated or overlapping studies; (4) articles with obvious mistakes. Statistical analysis Deviation from HWE in the controls of all studies was tested online using a web-based program (http://ihg.gsf.de/cgi-bin/hw/hwa1.pl), and a P value <0.05 was considered significant. The ORs with their corresponding 95% CIs (homozygote comparison, heterozygote comparison, dominant model and recessive model, respectively) were calculated to analyze the associations of the polymorphisms (rs2910164 in miR-146a, rs11614913 in miR-196a2 and rs3746444 in miR-499) with breast cancer susceptibility.
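To make the OR calculation concrete, the sketch below derives an odds ratio and its 95% CI from genotype counts under one genetic model (the dominant model, TT+CT vs CC); the counts are invented purely for illustration and are not taken from any included study.

```python
# Illustration only: odds ratio and 95% CI for a dominant-model comparison
# (e.g. TT+CT vs CC). The genotype counts below are invented, not taken from
# any of the included studies.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a = exposed cases, b = unexposed cases, c = exposed controls, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of ln(OR)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, lower, upper

# Hypothetical counts: (TT+CT) cases, CC cases, (TT+CT) controls, CC controls
print(odds_ratio_ci(1180, 420, 1250, 390))
```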
The significance of the pooled ORs was checked by the Z test, and statistical significance was defined as a P value <0.05. The Cochran Q test and the I² statistic were used to evaluate whether the results from these studies were homogeneous [24,25]. For the Cochran Q test, a P value <0.10 suggests heterogeneity among studies. For the I² statistic, a value <40% indicates "not important heterogeneity", while a value >75% shows "considerable heterogeneity". In the presence of heterogeneity, the random effects model (DerSimonian-Laird) was chosen; otherwise, the fixed effects model (Mantel-Haenszel) was used to calculate the pooled ORs. Publication bias was evaluated using the Begg-Mazumdar adjusted rank correlation test and the Egger regression asymmetry test; a P value <0.10 was considered representative of statistically significant publication bias [26,27]. Sensitivity analysis was carried out to assess the stability of these results. All statistical analyses were carried out using STATA 11.0 software (STATA Corp, College Station, TX, USA). Characteristics of studies Eligible studies were selected according to the inclusion and exclusion criteria (Figure 1). Thirty-one records were excluded by reviewing article titles and abstracts, including 16 records that did not focus on breast cancer and 15 records that were systematic reviews. Then, 14 full texts and related reference lists were read. Five records were excluded: 2 records were not case-control studies and 3 records were breast cancer diagnosis and therapy studies. The article published by Alshatwi contained discrepancies between the data shown in the tables and the data described in the results section [14]; therefore, after consultation with the author, these data were excluded. In Catucci's [19] and Linhares's [28] studies, the genotype frequencies were presented according to the subjects' country or race, as in previous reports [29,30]; thus, in the present analysis each group was considered as an independent study. Moreover, in some included articles, if two or more miRNA SNPs were investigated in one article, each miRNA SNP was considered as an independent study. Therefore, six studies, involving 4225 cases and 4469 controls, were ultimately analyzed for the SNP (rs2910164) in miR-146a [15,17,19,31,32]; eight studies, involving 4110 cases and 5100 controls, were analyzed for rs11614913 in miR-196a [17,19,20,22,28,32]; and three studies, involving 2588 cases and 3260 controls, were analyzed for rs3746444 in miR-499 [17,19]. Characteristics of the included studies are shown in Table 1. These studies were published from 2009 to 2012. The subjects came from different countries (Australia, Brazil, China, France, Germany, Italy, and USA). Ethnicity was categorized as Caucasian or non-Caucasian. Genotyping methods included polymerase chain reaction-restriction fragment length polymorphism (PCR-RFLP), the TaqMan SNP genotyping assay and MassArray multiplexing. Blood samples were used for genotyping in most studies. HWE was assessed by a chi-square test. The distribution of genotypes in the controls agreed with HWE (P>0.05) in most of the studies, but some parts of the data in Catucci's [19] and Linhares's [28] studies significantly departed from HWE (P<0.05). The Begg-Mazumdar adjusted rank correlation test and the Egger regression asymmetry test were used to assess the publication bias of the currently available literature.
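As a simplified stand-in for the Mantel-Haenszel and DerSimonian-Laird procedures named in the Statistical analysis section, the sketch below pools per-study log odds ratios by inverse-variance weighting and computes Cochran's Q and I²; all input values are invented for illustration.

```python
# Simplified sketch of fixed-effect (inverse-variance) pooling with Cochran's Q
# and I^2. This is not the exact Mantel-Haenszel / DerSimonian-Laird code used
# in STATA; the per-study log(OR) values and standard errors are invented.
import math

def pool_fixed(log_ors, ses, z=1.96):
    weights = [1.0 / se ** 2 for se in ses]
    pooled = sum(w * lo for w, lo in zip(weights, log_ors)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    # Cochran's Q: weighted squared deviations of each study from the pooled effect
    q = sum(w * (lo - pooled) ** 2 for w, lo in zip(weights, log_ors))
    dof = len(log_ors) - 1
    i2 = max(0.0, (q - dof) / q) * 100 if q > 0 else 0.0
    ci = (math.exp(pooled - z * se_pooled), math.exp(pooled + z * se_pooled))
    return math.exp(pooled), ci, q, i2

# Hypothetical per-study log(OR) values and their standard errors
print(pool_fixed([-0.10, -0.05, -0.12, 0.02], [0.06, 0.08, 0.07, 0.10]))
```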
Main results The meta-analysis results for the three SNPs in the miRNAs and breast cancer risk are shown in Table 2. There were no significant associations between polymorphism rs2910164 in miR-146a and breast cancer susceptibility for any genetic model. Because the data for the Italian population group in Catucci's study [19] significantly departed from HWE (P = 0.019), we excluded these data when analyzing the associations of rs2910164 in miR-146a with breast cancer susceptibility; no significant risk associations were observed between them. When all the studies concerning SNP rs11614913 in miR-196a2 were pooled into this meta-analysis, no significant breast cancer risk was observed for any SNP genotype of miR-196a2. After excluding Linhares's study [28], in which the distribution of miR-196a2 genotypes in controls deviated from HWE (P = 0.008) and the included population was mixed, we found that the heterogeneity of the miR-196a2 SNP data was reduced and the genotypic results were more credible. In the comparison of genotypes (TT+CT) vs CC, obvious heterogeneity (heterogeneity chi-square test = 17.83, P-Het = 0.013, I² = 60.7%) was reduced to little heterogeneity (heterogeneity chi-square test = 6.70, P-Het = 0.244, I² = 25.3%). The fixed effect model was then used, and a significant difference was observed between the (TT+CT) genotype and breast cancer susceptibility (OR 0.906, 95% CI: 0.825-0.995, P = 0.039, Figure 2). No significant risk associations with breast cancer susceptibility were demonstrated for the other SNP genotypes. Three studies of polymorphism rs3746444 in miR-499 were included in the meta-analysis. No significant risk associations with breast cancer susceptibility were revealed for any SNP of the miR-499 genotypes. No subgroup analysis was performed because of the limited number of studies. Significant heterogeneities in the data of the miR-196a2 rs11614913 SNPs were observed in Table 2. The sources of this heterogeneity were then evaluated systematically using meta-regression. The source of heterogeneity was found to be mainly related to the article publication year (t = 4.64, P = 0.004). Because of the limited number of published articles, we did not perform the subgroup analysis by publication year. All the results for the three SNPs in the miRNAs obtained from the random effects or fixed effects models were similar. No publication bias was found in this meta-analysis using Begg's (P>0.05) and Egger's tests (P>0.05). Polymorphism rs2910164 in miR-146a is located in the 3p strand and comprises a G to C change, which results in a change from a G:U pair to a C:U mismatch in the stem structure of the miR-146a precursor and alters the expression of mature miR-146a to influence cancer risk [42,43]. To further explore whether miR-146a rs2910164 is associated with breast cancer susceptibility, 4225 cases and 4469 controls were investigated for miR-146a rs2910164 in this meta-analysis. Our results failed to find an association between polymorphism rs2910164 in miR-146a and breast cancer risk, similar to other studies [37,38,44], but differ from Lian's report, which showed that an increased risk of breast cancer was associated with the CC genotype of rs2910164 in miR-146a in Europeans [36]. The difference between our study and Lian's study may be attributed to removing or retaining the Italian population data in Catucci's study [19]. In our study, we found that the Italian population data in Catucci's study deviated from HWE (P = 0.019) and removed these data from our meta-analysis.
By contrast, these data were included in the analysis of Europeans in Lian's study [36]. Polymorphism rs11614913 in miR-196a2, which is located in the 3′ mature sequence of miR-196a2, may affect pre-miRNA maturation [12,17]. Li et al. reported that the expression level of miR-196a was significantly higher in hepatocellular carcinoma patients with the CC genotype (or at least one C allele) than in patients with the TT genotype [45]. Many studies showed that individuals carrying the CC genotype had a significantly elevated risk of breast cancer, lung cancer, gastric cancer, colorectal cancer and hepatocellular carcinoma compared to those with the TT or TT+TC genotypes [44][45][46]. When all eligible studies were pooled into this meta-analysis, no significantly increased breast cancer risk was found. After excluding the data in which the genotype distribution in the controls deviated from HWE, the heterogeneities were reduced, revealing an association of the CC genotype of the miR-196a2 SNP with an increased breast cancer risk compared with the TT+CT genotypes, which was consistent with our previous finding [47]. Our results provide compelling evidence that polymorphism rs11614913 in miR-196a2 plays a crucial role in breast cancer development, and support the view that this SNP in miR-196a2 could be used as a candidate biomarker for the assessment of breast cancer risk. Polymorphism rs3746444 in miR-499 involves an A to G nucleotide substitution, which leads to a change from an A:U pair to a G:U mismatch in the stem structure of the miR-499 precursor [48]. A number of case-control studies have investigated the association of the SNP in miR-499 with cancer risk in multiple types of cancer [46,48,49]. However, only a few epidemiological studies have focused on the association between polymorphism rs3746444 in miR-499 and breast cancer risk. Our meta-analysis failed to discover an obvious association between rs3746444 in miR-499 and breast cancer risk. The exact roles of miR-499 SNPs in breast cancer risk require further studies. Sample size is an important parameter for investigating the genetic effect of any SNP. Our meta-analysis provided larger and more sufficient numbers of cases and controls than any single study, significantly increasing the statistical power. In addition, we assessed the quality of the studies in this meta-analysis, which improved the reliability of the results. Although meta-analysis is robust, there are still several limitations in this study. First, our study did not evaluate any potential gene-gene or gene-environment interactions. Second, our analysis was based on English publications, which may have introduced a language bias. Last, a lack of sufficient eligible studies limited further subgroup analyses. In conclusion, this study demonstrates that SNP rs11614913 in miR-196a2 plays a crucial role in the development of breast cancer. We found no significant associations of polymorphisms rs2910164 in miR-146a and rs3746444 in miR-499 with breast cancer susceptibility. Well-designed studies with larger sample sizes are needed to confirm the roles of these miRNA polymorphisms in breast cancer risk.
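As a closing illustration of the HWE screening applied to control genotype counts throughout this meta-analysis, a minimal chi-square sketch is shown below; the authors used a web-based calculator, and the genotype counts here are invented.

```python
# Minimal sketch of a Hardy-Weinberg equilibrium chi-square test on control
# genotype counts. The meta-analysis itself used a web-based HWE calculator;
# the genotype counts below are invented for illustration.
from scipy.stats import chi2

def hwe_chi_square(n_aa, n_ab, n_bb):
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)            # frequency of allele A
    q = 1.0 - p
    expected = [p * p * n, 2 * p * q * n, q * q * n]
    observed = [n_aa, n_ab, n_bb]
    stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    p_value = chi2.sf(stat, df=1)              # 1 df for a biallelic SNP
    return stat, p_value

print(hwe_chi_square(620, 310, 55))
```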
2016-05-12T22:15:10.714Z
2013-09-09T00:00:00.000
{ "year": 2013, "sha1": "5202f17b3a4e0dcfaf4ccfb84584df1d5d594b44", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0070656&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5202f17b3a4e0dcfaf4ccfb84584df1d5d594b44", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
10620556
pes2o/s2orc
v3-fos-license
Larval Competition Reduces Body Condition in the Female Seed Beetle, Callosobruchus maculatus Early body condition may be important for adult behavior and fitness, and is impacted by a number of environmental conditions and biotic interactions. Reduced fecundity of adult females exposed to larval competition may be caused by reduced body condition or shifts in relative body composition, yet these mechanisms have not been well researched. Here, body mass, body size, scaled body mass index, and two body components (water content and lean dry mass) of adult Callosobruchus maculatus (Fabricius) (Coleoptera: Chrysomelidae: Bruchinae) females exposed to larval competition or reared alone were examined. Experimental females emerged at significantly smaller body mass and body size than control females. Additionally, scaled body mass index and water content, but not lean dry mass, were significantly reduced in experimental females. To our knowledge, these are the first results that demonstrate a potential mechanism for previously documented direct effects of competition on fecundity in female bruchine beetles. Introduction Early body condition has important consequences for fitness (Thornton 2008) and is determined during the crucial time of early life, defined as the period from conception to maturity (Henry and Ulijaszek 1996). From the standpoint of the growing individual, optimal environmental conditions for development include, for example, an abundance of high quality food and space as well as ideal temperatures, humidity, and/or lighting conditions (Prout and McChesney 1985;Vamosi and Lesack 2007;Schirmer et al. 2008). Factors that may result in suboptimal conditions include exposure to predators (Brodin et al. 2006;Wohlfahrt et al. 2007;Mikolajewski et al. 2008), sexual conflict (Abbott et al. 2010), pollution or feces (Bedhomme et al. 2005), and stress (Shoemaker et al. 2006;Shoemaker and Adamo 2007). Adverse conditions encountered during development can have significant negative impacts on mass at birth or emergence (Metcalfe and Monaghan 2001), metabolic rate (Verhulst et al. 2006), and disease resistance (Reilly and Hajek 2008). Thus, poor early body status may reduce fitness through reduced survival and/or reproductive success (Lindström 1999). The fitness potential of holometabolous adult insects is often influenced primarily during larval development by resource availability and acquisition ability (Boggs and Freeman 2005). Callosobruchus maculatus (Fabricius) (Coleoptera: Chrysomelidae) is a holometabolous insect with larval and pupal stages confined within a bean, which may be shared by several individuals (Ofuya and Agele 1989;Messina and Tinney 1991), followed by a free-living adult form. Because C. maculatus do not need to feed or drink as adults to successfully reproduce (e.g., Fox 1993), one can experimentally isolate effects of larval conditions on adult fitness. Furthermore, because there is no parental care, reproductive success is tightly correlated with number and quality of eggs laid. Although the presence of a single competitor may only reduce body mass of C. maculatus females (Colegrave 1993), subsequent studies have revealed that females experiencing higher levels of larval competition tend to have a lower body mass upon emergence and lay fewer eggs for their mass than control females (Vamosi 2005;Vamosi and Lesack 2007). 
The latter results suggest that competition may affect fecundity independent of any effects on mass, but tests of proximate mechanisms are currently lacking. Prior to proceeding, however, we note that there has been considerable debate of late regarding the way in which body condition is estimated (e.g., Green 2001;Schulte-Hostedde et al. 2005;Green 2009, 2010). Traditionally, the effects of body size on body mass would first be "controlled for" by obtaining residuals from this regression and then conducting a one-way analysis of variance (ANOVA) on these residuals (i.e., estimates of body condition; Schulte-Hostedde et al. 2005), using competition treatment as the binary predictor variable. However, it has been pointed out (e.g., Green 2001;Green 2009, 2010) that this method generates biased parameter estimates and, more generally, that the use of residuals as data should be limited to post-hoc diagnosis of model fits (e.g., García-Berthou 2001;Freckleton 2002). More recently, a new approach based on allometric scaling has been proposed Green 2009, 2010; see Materials and Methods for an overview). This method appears especially preferable when attempting to compare body condition of groups that differ in mean size but it has not been previously applied to insects. Here, it was investigated whether larval competition affects body condition of adult females. Females exposed to larval competition were predicted to have lower body condition than those reared alone. It was also tested whether larval competition affects individual body components of adult females. Because adult females exposed to larval competition during development may lay fewer eggs than predicted for their body mass, an associated reduction in relative water content in experimental females was predicted. Study organism The 'hQ' strain of C. maculatus, which displays a scramble competition strategy in the larval stage (i.e., if several eggs are laid on a single bean, multiple adults may emerge), was used. Stock cultures of beetles were reared on adzuki beans (Vigna angularis) and maintained at 28 ºC, 50% RH and 24-hour dark conditions in Percival I33LLC8 growth chambers (www.percival-scientific.com). Competition treatments Several hundred adults from the stock culture were allowed to mate and oviposit on beans for 48 hours. Because ovipositing females may be able to recognize low quality beans (Mitchell 1975;Vamosi 2005), each bean was examined after 48 hours and only those with at least three eggs attached were retained for further use. Following previous studies (e.g., Vamosi 2005), two treatments were established (hereafter 'experimental' and 'control') in which females differed in the intensity of larval competition they experienced during development. Larval competition was manipulated by scraping off unwanted eggs before the larvae hatched and burrowed into beans. Although this method is relatively labor intensive, it avoids confounding effects potentially introduced by having two groups of parental females (i.e., few females on many beans to produce the control group, many females on few beans to produce the experimental treatment). Beans were randomly assigned to the two treatments. Approximately 150 beans had their eggs reduced to one egg per bean, using a scalpel to remove excess eggs. Beans were individually placed in 1.5 mL microcentrifuge tubes, with a small hole punctured in the lid for respiration, for incubation until emergence. 
This procedure ensured that adults emerging in the control treatment would have experienced no larval competition. For the competition treatment, approximately 250 beans were handled, without the removal of any eggs, and individually placed in similarly prepared 1.5 mL microcentrifuge tubes. More beans were isolated for the competition treatment because pilot studies revealed that the likelihood of a single egg on a bean producing one emerging adult was greater than that of several (i.e., three or more) eggs producing at least three emerging adults. Beginning 20 days after oviposition, tubes were checked daily for the control group and several times a day for the experimental treatment. Once emergence began, adult females were isolated in microcentrifuge tubes. To ensure that all the females from the experimental treatment were unmated, only females found alone or with other females were considered. All males, as well as females found to have emerged in the same time interval as a male, were recorded and discarded. Once an experimental female was isolated, the level of competition experienced by that female was determined by dissecting the bean to examine it for pupae or adults that had not yet emerged. To ensure larvae from the competition treatment experienced measurable effects of competition (cf. Vamosi 2005), only females reared with at least two other individuals that were minimally in the pupal stage when the female emerged were retained. Sample sizes were N = 30 for both treatment groups. Body components Procedures for obtaining body component measures followed those of Keller and Passera (1989). Within 24 hours of emergence, females were placed in sealed vials containing a swatch of paper towel wetted with ethyl acetate. The vapor killed the females within minutes and they were subsequently removed with forceps and measured for wet mass (hereafter, body mass) to the nearest 0.01 mg using a Sartorius balance (www.sartorius.com). Immediately upon obtaining body mass of females, three linear body measurements (right elytron length, right elytron width, and pronotum width) were obtained using a Leica microscope (www.leica-microsystems.com). Females were then placed in individual 10 mL glass screw top vials supported within a test tube rack and dried at 70.6 ± 0.4 ºC in a Fisher Scientific Isotemp Oven (www.fischersci.com) for 24 hours. To limit the absorption of atmospheric moisture, dry mass of females after water removal was obtained within 15 min of removal from the oven, which was subtracted from body mass to obtain water content. Females were returned to their individual vials and 10 mL of petroleum ether was injected with a syringe into each vial before being returned to the oven for an additional 24 hours. Females were removed from the vials with forceps and placed in clean vials followed by a second 24hour period of drying. To limit the absorption of atmospheric moisture, lean dry mass was measured for all females within 15 min of removal from the oven. Because no experiments were carried out to ensure that all fat was removed by the procedure (see O'Donnell and Jeanne 1995), results of fat content analyses are not reported. Statistical analyses Although our aim is not to critique the various methods, it was necessary to choose one a priori, rather than applying both and presenting the one that produced "significant" results. 
Because body size is often lower on average in competition females (e.g., Vamosi 2005), this raised the possibility that the slope of the relationship between size and mass would differ between control and experimental groups. Attempting to apply an ordinary least squares approach in such a scenario is problematic whether one assumes a constant slope (because there is evidence that the relationship between size and mass is actually curvilinear; Peig and Green 2010) or allows for two slopes (because the mean of the residuals for each group will necessarily be zero). Following Peig and Green (2009, 2010), three main steps were undertaken to obtain a 'scaled mass index' of body condition (hereafter, scaled body mass index) for individuals. First, the body size measurement that was most strongly correlated with body mass was determined. All three linear body measurements and also the first principal component from a principal components analysis (PCA) that included these body measurements (see also Schulte-Hostedde et al. 2005; Colgoni and Vamosi 2006) were included. In agreement with Peig and Green (2009, 2010), one of the single linear body measurements (i.e., right elytron length), and not Principal Component 1 from the PCA, was most strongly correlated with body mass (r = 0.76, t 58 = 9.04, p < 0.01). Second, ln-transformed right elytron length was regressed against ln-transformed body mass with standardized major axis regression, to obtain the slope estimate of this relationship (b SMA ). RMA for Java v. 1.21 (Bohonak and van der Linde 2004) was used for this procedure. Finally, the scaled body mass index (M̂ i ; mg) for each individual was calculated as M̂ i = M i × (L 0 /L i ) b SMA , where M i and L i are the body mass and right elytron length of individual i respectively, and L 0 is the arithmetic mean value for the sample (= 2.01 mm). The effect of the competition treatment on scaled body mass index was analyzed with one-way ANOVA. Correlations between scaled body mass index and scaled body components (water content and lean dry mass) for experimental and control females were calculated. To account for multiple comparisons, a correlation was deemed significant only when p < α/4 = 0.0125 (with α = 0.05). Scaled body components were obtained in the same way as described for scaled body mass index, substituting the appropriate body component for body mass in each case. First, multivariate ANOVA (MANOVA) was applied, followed by subsequent univariate ANOVAs for each body component. Analyses of correlations and treatment effects were conducted with R 2.12.1 (R Development Core Team 2010). Results Experimental females emerged at a significantly lower mean body mass (mean: 5.78 vs. 7.05 mg; F 1,58 = 27.42, p < 0.01; Figure 1) and smaller body size (F 1,58 = 4.90, p < 0.05) than control females. The slope of the relationship between ln-transformed right elytron length and ln-transformed body mass also differed markedly between the two groups (mean ± SE: control females, b SMA = 3.24 ± 0.38; experimental females, b SMA = 2.42 ± 0.30). One-way ANOVA on scaled body mass index values revealed that experimental females had significantly lower values than control females (F 1,58 = 14.61, p < 0.01), with a mean reduction of 9.5% (Figure 2). The findings of reduced mean body mass and scaled body mass index (i.e., body condition) suggest that negative physiological effects of competition were successfully attained by our protocol (see also Vamosi 2005; Vamosi and Lesack 2007).
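To make the three-step procedure above concrete, a minimal Python sketch is given below; it reproduces the scaled body mass index calculation (SMA slope from ln-transformed data, then M̂ i = M i × (L 0 /L i ) b SMA ) on hypothetical example values. The function name and the simulated data are illustrative only; the study itself used RMA for Java and R rather than this code.

```python
import numpy as np

def scaled_mass_index(mass, length):
    """Scaled body mass index following the three steps described in the text.

    mass   : body mass of each individual (mg)
    length : linear body size measure (here, right elytron length, mm)
    """
    ln_m, ln_l = np.log(mass), np.log(length)
    r = np.corrcoef(ln_l, ln_m)[0, 1]
    # SMA slope of ln(mass) on ln(length): ratio of standard deviations,
    # carrying the sign of the correlation.
    b_sma = np.sign(r) * np.std(ln_m, ddof=1) / np.std(ln_l, ddof=1)
    l0 = length.mean()                      # arithmetic mean length (2.01 mm in the study)
    return mass * (l0 / length) ** b_sma    # M_hat_i = M_i * (L0 / L_i)^b_SMA

# Hypothetical example data (not the study's measurements).
rng = np.random.default_rng(1)
elytron = rng.normal(2.0, 0.1, 30)                       # mm
mass = 0.8 * elytron ** 3 * rng.lognormal(0, 0.05, 30)   # mg
print(scaled_mass_index(mass, elytron)[:5])
```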
All four correlations between body condition and scaled body components were significant, even accounting for multiple comparisons (Table 1). In both groups, the ranking of the correlation between scaled body mass index and body components was the same as observed for five mammal species in Peig and Green (2009; based on original data from Schulte-Hostedde et al. 2005), i.e., water > lean dry mass. Although MANOVA was not significant (F 3,56 = 1.93, p = 0.13), subsequent univariate tests revealed a significant negative effect of competition treatment on scaled water content (F 1,58 = 5.05, p < 0.05) and a nonsignificant negative effect on lean dry mass (F 1,58 = 3.51, p = 0.066). Discussion Extending previous studies that demonstrated a reduction in mass-corrected number of eggs laid by females exposed to larval competition (Vamosi 2005; Vamosi and Lesack 2007), scaled body condition and body components of control and experimental females were analyzed. Experimental females were predicted to have reduced body condition and reduced water content compared to control females. Females that were reared with at least two other individuals while developing in medium-sized beans (i.e., experimental females) were significantly lighter and smaller at emergence than those reared alone (i.e., control females), in agreement with previous studies (e.g., Colegrave 1993; Vamosi 2005). Additionally, a significant reduction in body condition, measured as scaled body mass index (Peig and Green 2009, 2010), was observed in experimental females. With regard to body components, there was a significant reduction in water content (mean effect = 6.7%) and a marginal reduction in lean dry mass (7.3%) in experimental females. To our knowledge, this is the first investigation and documentation of potential mechanisms that may cause previously documented direct negative effects of competition on fecundity in bruchine beetles (Vamosi 2005; Vamosi and Lesack 2007). Water availability has been shown to affect various aspects of the biology of bruchine beetles. Bruchine beetles are classified as being xerophilic (i.e., able to grow and reproduce without access to free water; Appel et al. 2009), although they will drink free water and lap at sugar-water (e.g., Fox and Moya-Laraño 2009; D Schade and SM Vamosi, pers. obs.). Contrary to expectations, female bruchine beetles do not preferentially lay eggs on high moisture seeds, although the apparent preference for dry seeds may simply be because the latter have reduced chemical defenses (Hudaib et al. 2010). Availability of water has been demonstrated to have significant effects on the mating behavior of adult C. maculatus females (Edvardsson 2007; Ursprung et al. 2009; Fox and Moya-Laraño 2009). Females provided with access to free water have been observed to mate less frequently than those deprived of water (Edvardsson 2007; Fox and Moya-Laraño 2009). Water, rather than nutrient content, in the ejaculate has been suggested to modulate remating frequency in adult females (Ursprung et al. 2009). Access to water may be associated with significant positive effects on fecundity and longevity of females, although both effects appear strongest when water is provided in combination with sugar (Ursprung et al. 2009; Fox and Moya-Laraño 2009). Together, these observations suggest that the reduction in water content of experimental females documented in the present study may translate into biologically relevant consequences for their mating behavior and fecundity.
Evaluating the reception of the scaled mass index method for estimating body condition is currently difficult, given the lack of studies that have cited Green (2009, 2010) thus far. However, three observations suggest that it may be a robust methodology for similar studies in future. First, the slopes of the relationship between ln-transformed right elytron length and ln-transformed body mass for C. maculatus females corresponded well to values reported in Table 2 of Peig and Green (2009) for seven vertebrate species (median: 2.9, range = 1.4 to 3.6). Second, correlations between body condition and scaled body components (Table 1) were similar in magnitude to the mean values (lean dry mass: 0.84; water: 0.91) computed for the five mammal species reported in Table 3 of Peig and Green (2009). Finally, and most significantly, this methodology allowed for the comparison of body condition of two groups (experimental vs. control females) that differed in the slope of the relationship between size and mass. It is likely that the method defended by Schulte-Hostedde et al. (2005) will continue to be appropriate in many instances, but we suggest researchers consider applying Green's (2009, 2010) scaled body mass index for estimating body condition whenever there is an a priori reason to suspect that the groups being compared will differ in mean body mass and/or body size. Changes in either or both traits are certainly commonly observed in response to competition (Colegrave 1993;Boggs and Freeman 2005;Vamosi 2005;Harvey et al. 2009), but may also be triggered by variation in several other factors, including temperature (Marti and Carpenter 2008) and resource type (Ueno 2003). Because Callosobruchus is increasingly being used as a model organism in several areas of ecology and evolution (e.g., Fox 1993; Crudgington and Siva-Jothy 2000; Arnqvist and Tuda 2010), future investigations should explicitly examine the consequences for different environmental conditions encountered during development on adult behavior and fitness. The present study could be extended in several ways, from comparative and life history perspectives. For the former, because only a single scramble strain was considered, it may be informative to investigate whether similar patterns hold for multiple contest and competition strains. For the latter, body condition could be noninvasively measured (i.e., by measuring only body length and body mass of females upon emergence), which could be included as a covariate in subsequent analyses of mating frequency, longevity, and mass-corrected fecundity of competition vs. control females. Finally, most studies have considered these phenomena in females, whereas the effects on males have been relatively understudied. In conclusion, exposure to larval competition during development resulted in adult C. maculatus females with significantly lower body mass, body size, scaled body mass index (i.e., body condition), and water content than control females. These results are the first to provide a potential mechanism for reduced mass-corrected fecundity in females exposed to competition during larval development, and corroborate previous demonstrations of a potential positive effect of access to free water on longevity and fecundity in bruchine beetles.
Intertwined leukocyte balances in tumours and peripheral blood as robust predictors of right and left colorectal cancer survival BACKGROUND Colorectal cancer (CRC) accounts for 9.4% of overall cancer deaths, ranking second after lung cancer. Despite the large number of factors tested to predict their outcome, most patients with similar variables show big differences in survival. Moreover, right-sided CRC (RCRC) and left-sided CRC (LCRC) patients exhibit large differences in outcome after surgical intervention as assessed by preoperative blood leukocyte status. We hypothesised that stronger indexes than circulating (blood) leukocyte ratios to predict RCRC and LCRC patient outcomes will result from combining both circulating and infiltrated (tumour/peritumour fixed tissues) concentrations of leukocytes. AIM To seek variables involving leukocyte balances in peripheral blood and tumour tissues and to predict the outcome of CRC patients. METHODS Sixty-five patients diagnosed with colon adenocarcinoma by the Digestive Surgery Service of the La Paz University Hospital (Madrid, Spain) were enrolled in this study: 43 with RCRC and 22 with LCRC. Patients were followed-up from January 2017 to March 2021 to record overall survival (OS) and recurrence-free survival (RFS) after surgical interventions. Leukocyte concentrations in peripheral blood were determined by routine laboratory protocols. Paraffin-fixed samples of tumour and peritumoural tissues were assessed for leukocyte concentrations by immunohistochemical detection of CD4, CD8, and CD14 marker expression. Ratios of leukocyte concentration in blood and tissues were calculated and evaluated for their predictor values for OS and RFS with Spearman correlations and Cox univariate and multivariate proportional hazards regression, followed by the calculation of the receiver-operating characteristic and area under the curve (AUC) and the determination of Youden’s optimal cutoff values for those variables that significantly correlated with either RCRC or LCRC patient outcomes. RCRC patients from the cohort were randomly assigned to modelling and validation sets, and clinician-friendly nomograms were developed to predict OS and RFS from the respective significant indexes. The accuracy of the model was evaluated using calibration and validation plots. RESULTS The relationship of leukocyte ratios in blood and peritumour resulted in six robust predictors of worse OS in RCRC: CD8+ lymphocyte content in peritumour (CD8pt, AUC = 0.585, cutoff < 8.250, P = 0.0077); total lymphocyte content in peritumour (CD4CD8pt, AUC = 0.550, cutoff < 10.160, P = 0.0188); lymphocyte-to-monocyte ratio in peritumour (LMRpt, AUC = 0.807, cutoff < 3.185, P = 0.0028); CD8+ LMR in peritumour (CD8MRpt, AUC = 0.757, cutoff < 1.650, P = 0.0007); the ratio of blood LMR to LMR in peritumour (LMRb/LMRpt, AUC = 0.672, cutoff > 0.985, P = 0.0244); and the ratio of blood LMR to CD8+ LMR in peritumour (LMRb/CD8MRpt, AUC = 0.601, cutoff > 1.485, P = 0.0101). In addition, three robust predictors of worse RFS in RCRC were found: LMRpt (AUC = 0.737, cutoff < 3.185, P = 0.0046); LMRb/LMRpt (AUC = 0.678, cutoff > 0.985, P = 0.0155) and LMRb/CD8MRpt (AUC = 0.615, cutoff > 1.485, P = 0.0141). Furthermore, the ratio of blood LMR to CD4+ LMR in peritumour (LMRb/CD4MRpt, AUC = 0.786, cutoff > 10.570, P = 0.0416) was found to robustly predict poorer OS in LCRC patients. 
The nomograms showed moderate accuracy in predicting OS and RFS in RCRC patients, with concordance indexes of 0.600 and 0.605, respectively. CONCLUSION Easily obtainable variables at preoperative consultation, defining the status of leukocyte balances between peripheral blood and peritumoural tissues, are robust predictors for OS and RFS of both RCRC and LCRC patients. INTRODUCTION Despite the great medical and scientific achievements attained over the last decades in the fields of cancer understanding, early detection, and care, cancer continues to be a major threat worldwide. Amongst the many pathologies gathered under this term, colorectal cancer (CRC) accounts for 9.4% of overall cancer deaths, ranking second just after lung cancer [1]. CRC treatments vary depending on tumour location and stage of diagnosis; standard colectomy (along with lymphadenectomy) without adjuvant therapy is the usual treatment in early stages I and II, while most patients in advanced stages III and IV follow with chemo- and/or radiotherapy to reduce the risk of recurrence [2]. However, a large proportion of these patients present with (synchronous; 15%-25%) or will develop (metachronous; 40%-75%) metastases, mainly in the liver [3], which constitute the major cause of death [4]. Therefore, the 5-year relative survival rate is reduced from 90% in early-stage detection to 12% in advanced cases [2]. Thus, finding robust markers before surgery to predict patient outcomes constitutes a safe strategy in order to stratify those groups with a high risk of recurrence and design personalised pre- and postoperative therapies. A wide variety of factors, mainly based on clinical and pathological features, have been tested as prognostic markers for CRC development, such as: weight loss, haemoglobin levels, tumour-nodes-metastasis classification (TNM) staging and tumour differentiation, mismatch-repair proficiency, lymph node involvement, or response to (neo-) adjuvant therapies [5][6][7].
Moreover, since a clear distinction between the behaviour of right-sided CRC (RCRC) and left-sided CRC (LCRC) patients is well established, much effort has been put into categorising putative prognostic markers according to their respective characteristics, though still with controversial results [8]. Currently, an increasing number of research and clinical trials are supporting evidence of the influence of the systemic inflammatory response in cancer progression [5]. A measure of this response has been assessed by combining the number of peripheral circulating leukocytes: lymphocyte-to-monocyte ratio (LMR), neutrophil-tolymphocytes ratio (NLR), and platelet-to-lymphocyte ratio (PLR). These analyses have shown interesting prognostic associations in several cancer types including urothelial, nasopharyngeal, osteosarcoma, lung carcinomas [9][10][11][12], and CRC [13][14][15][16]. Nevertheless, few studies have been directed towards the prognostic value of intertwined relationships across circulating and tumour-infiltrated populations of leucocytes on solid tumour progression [17][18][19]. Herein, we aimed to delve deep into the prognostic value of leukocyte distribution ratios, in both blood and tumour tissues, for CRC patient outcomes after surgery. We hypothesised that stronger indexes than circulating (blood) leukocyte ratios to predict patient outcome will result from combining both circulating and infiltrated (tumour/peritumoural tissues) concentrations of leukocytes. We show six robust predictors for RCRC overall survival (CD8 pt , CD4CD8 pt , LMR pt , CD8MR pt , LMR b / LMR pt , LMR b /CD8MR pt ), three for RCRC recurrence-free survival (LMR pt , CD8MR pt , LMR b /LMR pt , LMR b /CD8MR pt ), and another one for LCRC overall survival (LMR b /CD4MR pt ), all these being based on the ratios between blood and peritumoural tissue concentration of lymphocytes and monocytes. Moreover, we highlight the importance of these variables in designing ad hoc surgical strategies, due to the ease with which surgeons can build a protocol by taking samples of peripheral blood and peritumoural tissue during a preoperative colonoscopy. Patient selection Sixty-five patients diagnosed with colon adenocarcinoma, with no records of previous neo-adjuvant therapy, were recruited at the Digestive Surgery Service of La Paz University Hospital (Madrid, Spain) from January 2017 to September 2019. They were surgically treated according to each patient's condition for right (caecum, ascending, or transverse colon) or left (descending or sigmoid colon) hemicolectomies followed by Exclusion criteria Only patients with adenomas or rectum adenocarcinoma were excluded from the study. Blood tests Venous blood samples were collected in 10 mL EDTA-tubes in the hospital room, 24 h prior to surgery and routinely tested for white blood cell, lymphocyte (L), monocyte (M), neutrophil (N) and platelet (P) counts at the Central Laboratory (CORE) of the La Paz University Hospital. Preoperative blood LMR (LMR b ), NLR (NLR b ), and PLR (PLR b ) were then calculated for each patient by dividing the absolute counts of the respective populations in the peripheral blood (Table 1). Tissue preparation Samples from the middle part (avoiding both the epicentre and the edge) of the tumours, 5 cm-adjacent peritumoural (non-neoplastic), and liver (in case of synchronous metastases) tissues were taken at the time of surgery, upon in situ evaluation of morphological characteristics by pathologists. 
Histological types and grades were based on microscopic features. Microsatellite stability analyses were performed as previously described [20]. Organ samples were washed with PBS solution containing 56 μg/mL gentamicin (Braun, Melsungen, Germany; 636159), 2.5 μg/mL fungizome/anphotericin-B (Gibco, Amarillo, TX, United States; 15290-018), and 1% penicillin/streptomycin (Sigma-Aldrich, Saint Louis, MO, United States; P4333-100mL) and gently shaken for 30 min at room temperature. Then they were fixed in 4% paraformaldehyde for 16 h, washed with PBS for 24 h, and paraffin-embedded by standard procedures. Immunohistochemistry and image analysis Thin sections (5 μm thick) of TMAs were cut with a Leica (RM2255) ultrathinmicrotome and allowed to completely adhere to slides for 30 min at 60°C, before staining with commercially available antibodies against assessed surface markers was performed by standardised protocols (see Supplementary Table 1 for a complete list of primary and secondary antibodies used). Briefly, sections were deparaffinised with xylene, rehydrated through graded (100% to 70%) ethanol, and blocked for endogenous peroxidase by immersion in 97% methanol. Next, sections were immersed in heated sodium citrate buffer (10 mmol/L, pH 6.0) for antigenicity recovery and then incubated in unspecific-binding blocking solution [TBS solution containing 1% BSA, 1% Triton X-100 (Thermo Scientific; Waltham, MA, United States, 85111) and 2.5% horse serum (Gibco; Amarillo, TX, United States, 26050088)]. Primary antibodies were then added at recommended dilutions and incubated overnight at 4°C in a humid chamber. After washing slides with TBS, matched HRP-secondary antibodies were added and incubated for 1 h at room temperature. Then, DAB chromogen (DAB substrate kit, Cell Marque; Rocklin, CA, United States, 1-957D-30) was added for a few seconds until colour change and gently washed with TBS and distilled water. Finally, sections were counterstained by immersion in haematoxylin, dehydrated through graded (70% to 100%) ethanol, and mounted with DPX medium (Sigma-Aldrich; Saint louis, MO, United States, 06522). An average of four photographs per sample (in order to cover the whole field for each sample on the TMA sections) were taken with an Olympus BX-41 microscope and blind-analysed by two independent observers with ImageJ (v1.52p), for the calculus of the relative areas to each antibody corresponding surface marker expression (CD4, CD8, and CD14). For a detailed description of the image processing see Supple- mentary Figure 1A. A percentage of the total tissue area (A) for the three surface markers, in each patient's tumour and peritumour samples, was reported as the mean of all their relative areas per field. Total tumour and peritumour LMRs (respectively, LMR t and LMR pt ) were calculated by dividing the sum of the areas for CD4 and CD8 by the area for CD14, e.g., LMR t =(A(CD4 t )+A(CD8 t ))/A(CD14 t ). Individual subpopulation ratios were also analysed for both tumour and peritumour samples (CD4MR t , CD8MR t and CD4MR pt , CD8MR pt , respectively), e.g., CD4MR t =A(CD4 t )/A(CD14 t ). Then, blood-to-tissue ratios for all previous tumour and peritumour subpopulation ratios (LMR b /LMR t , LMR b /CD4MR t , LMR b /CD8MR t and LMR b /LMR pt , LMR b /CD4MR pt , LMR b /CD8MR pt , respectively) were also reported for each patient. 
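To make the ratio definitions above concrete, the short Python sketch below reproduces the arithmetic described in this subsection (e.g., LMR t = (A(CD4 t ) + A(CD8 t ))/A(CD14 t ) and the blood-to-tissue ratios). The marker areas, blood counts and function names are hypothetical placeholders, not patient data from the study.

```python
def tissue_ratios(a_cd4, a_cd8, a_cd14):
    """Tissue leukocyte ratios from the mean relative areas (%) of each marker."""
    return {
        "LMR": (a_cd4 + a_cd8) / a_cd14,   # total lymphocyte-to-monocyte ratio
        "CD4MR": a_cd4 / a_cd14,           # CD4+ lymphocyte-to-monocyte ratio
        "CD8MR": a_cd8 / a_cd14,           # CD8+ lymphocyte-to-monocyte ratio
    }

def blood_to_tissue_ratios(lmr_b, tissue):
    """Blood LMR divided by each tissue ratio (e.g., LMR_b / LMR_pt)."""
    return {f"LMRb/{name}": lmr_b / value for name, value in tissue.items()}

# Hypothetical patient: blood counts (cells/uL) and peritumour marker areas (%).
lymphocytes, monocytes = 1800.0, 450.0
lmr_b = lymphocytes / monocytes
peritumour = tissue_ratios(a_cd4=4.2, a_cd8=5.1, a_cd14=3.0)
print(peritumour)
print(blood_to_tissue_ratios(lmr_b, peritumour))
```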
Nomogram construction and validation All RCRC patients from the cohort were randomly divided into training (60%) and validation (40%) sets to establish and validate the clinician-friendly nomograms. For each nomogram to predict the probability of OS or RFS, the six or three significant predictive factors identified earlier, respectively, were used to formulate the nomograms with several R packages. The discriminatory ability of the nomogram was assessed by calculating Harrell's concordance index (C-index). Statistical analysis Data are represented as mean ± standard deviation. Student's t test was used for pairwise comparisons. Mann-Whitney U analysis was applied for equal standard deviations, otherwise Welch's correction was used. The distribution of the variables was assessed by a nonparametric test. Spearman r correlations were used to evaluate the association of the variables and ratios with the OS and RFS observed in our patients. Survival and population ratio relationships were analysed using Cox proportional hazard ratios; statistically significant variables in univariate analysis were further evaluated with the Cox multivariate step-by-step backward method to identify those with independent prognostic value. The Kaplan-Meier method was used to calculate the differences in OS and RFS rates for RCRC and LCRC over time (months), and significance was compared using the log-rank (Mantel-Cox) test; median survival times (months), survival proportions and P accuracy were reported. We calculated the receiver-operating characteristic (ROC) curve and the area under the curve (AUC) to determine whether the different variables and ratios could be used to predict OS and RFS in our cohort. We indicated the sensitivity, the specificity, the positive and negative predictive values, and the 95% confidence interval for the AUC and P accuracy. Optimal cutoff values, as determined with Youden's index, Harrell's C-index, and P accuracy, were calculated with R software. P values of 0.05 or less were considered indicative of statistical significance, and all tests were two-sided. All statistics were performed in either Prism 6.0 (GraphPad, San Diego, CA, United States) or SPSS. Cohort baseline characteristics The cohort included in this study was exclusively recruited by one team of surgeons, from their assigned patients for surgically treated disorders of the digestive tract, and thus constitutes only a fraction of all CRC patients attended at La Paz University Hospital during the recruitment period. Detailed clinicopathological characterisation of patients is shown in Table 1. A total of 65 patients with a mean age of 73.5 years, of whom 43 (66.1%) presented with RCRC and 22 (33.8%) with LCRC, were finally enrolled. Of these, 29 (44.6%) were women and 36 (55.4%) were men. With the exception of one case, all had been programmed for surgery without an emergency condition. Forty-eight (73.8%) were hemicolectomised by a minimally invasive laparoscopic procedure. They ranged from stages 0 to IV, based on TNM classification; 28 (43.1%) presented with metastasis (either synchronous or metachronous at the time of surgery), and 30 (46.1%) received adjuvant therapy after surgery. Fifty-six (86.1%) of the tumours were found proficient for the mismatch-repair machinery at the histological level.
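As an illustration of the cutoff selection described under Statistical analysis, the sketch below computes Youden's index (sensitivity + specificity − 1) at every candidate threshold and returns the threshold that maximises it. The data, the assumed "worse below the cutoff" direction and the function name are hypothetical; the study performed this step in R.

```python
import numpy as np

def youden_optimal_cutoff(values, event):
    """Return (cutoff, J) maximising Youden's index for a continuous marker.

    values : marker values (e.g., LMR in peritumour)
    event  : 1 if the outcome occurred (e.g., death), 0 otherwise
    """
    values, event = np.asarray(values, float), np.asarray(event, int)
    best = (None, -np.inf)
    for c in np.unique(values):
        pred = values < c                      # assumption: low marker value flags risk
        tp = np.sum(pred & (event == 1))
        fn = np.sum(~pred & (event == 1))
        tn = np.sum(~pred & (event == 0))
        fp = np.sum(pred & (event == 0))
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        j = sens + spec - 1.0
        if j > best[1]:
            best = (float(c), j)
    return best

# Hypothetical marker values and outcomes, purely for illustration.
rng = np.random.default_rng(7)
marker = np.r_[rng.normal(2.5, 0.8, 20), rng.normal(4.0, 0.8, 20)]
outcome = np.r_[np.ones(20, int), np.zeros(20, int)]
print(youden_optimal_cutoff(marker, outcome))
```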
Patient progression follow-up The survival analysis, with a median follow-up of 26 mo, showed no differences for OS between RCRC and LCRC patients ( Figure 1A) but a trend towards poorer outcome for the latter (74.5% vs 40.8%, P = 0.1875). However, in the analysis of RFS ( Figure 1B), we observed significantly better outcomes for RCRC compared to LCRC patients (60.4% vs 19.1%, P = 0.0036). Leukocyte counts and ratios We found no differences (Table 1) in total leukocyte counts nor in individual populations of circulating lymphocytes, monocytes, neutrophils, or platelets between RCRC and LCRC patient peripheral blood. However, though all mean counts for both groups were within the normal physiological ranges, RCRC patients showed a trend towards low circulating lymphocytes. Thus, their LMR b was lower (P = 0.0462) than LCRC patients ( Figure 2A). Neither NLR b nor PLR b showed differences between RCRC and LCRC patients ( Figure 2B and C). Tissues from 54 out of the total 65 patients included in the study, 34 from RCRC patients (63%) and 20 from LCRC patients (37%), could be assessed for leukocyte infiltration analyses. This fact was mainly due to the morphological characteristics of 11 tumours, which made it impossible to separate pieces for research purposes without affecting the global diagnostics by pathologists. Figure 3 shows the staining pattern for CD4, CD8, and CD14 cells in tumour and peritumour samples from two representative patients of LCRC and RCRC. The distribution of total (CD4 + plus CD8 + ) lymphocytes, CD4 + lymphocytes, CD8 + lymphocytes, and CD14 + monocytes, in all analysed tissues, is shown in Supplementary Figure 2. Higher total lymphocyte content in tumours than peritumours from LCRC patients (13.06 ± 2.123 vs 7.57 ± 1.794, P = 0.0095) seemed due to the proportional increase of CD8 + lymphocytes (11.19 ± 2.158 vs 5.13 ± 1.757, P = 0.0020), as we detected no differences amongst infiltrated CD4 + lymphocytes in these tissues. No differences were found for lymphocyte infiltration in right tumours with respect to right peritumoural tissues. Moreover, infiltrated-leukocyte content in right tumours showed no differences to right peritumours. Next, the effect of these variables on survival was assessed by Cox proportional hazards regression. For OS (Table 3), the univariate analysis revealed that besides previously found LMR b (P = 0.043), LMR pt (P = 0.024), and CD8MR pt (P = 0.031) in RCRC patients, NLR b (P = 0.038) also significantly correlated with OS; LMR b /CD4MR pt (P = 0.026) was also confirmed to be significantly correlated with OS of LCRC patients. After adjusting for confounding variables through the multivariate analysis, NLR b (P = 0.038), CD8MR pt (P = 0.011), and LMR b /CD8MR pt (P = 0.016) resulted in a significant association with OS of RCRC patients; CD8 pt (P = 0.058) also showed a trend towards being associated. Nomograms modelling and validation In order to avoid conflicts in handling the different values of the predictive indexes for RCRC patients, clinician-friendly nomograms were developed for both OS ( Figure 7A) and RFS ( Figure 7B) of these patients. The six significant predictive variables found for OS and the three found for RFS were used to construct the respective nomograms, with data from the training set of RCRC patients. 
The calibration of these nomograms was assessed with calibration and validation plots. DISCUSSION The segment of the large intestine proximal to the splenic flexure, i.e. the right colon (comprising caecum, ascending colon, and proximal two-thirds of the transverse colon), derives from the embryonic midgut; whereas the left colon (comprising the distal third of the transverse colon and the descending and sigmoid colon) derives from the embryonic hindgut [21]. The distinct embryologic origin of the right and left sides of the colon markedly determines important physiological differences, mainly: cell motility, vasculature, lymphatic drainage, extrinsic innervation, development of the endocrine components, and the expression and patterns of epigenetic marks of crucial molecular factors for cell development [21,22]. Since the seminal contributions by Bufill et al [23], an increasing number of studies have supported the hypothesis that these differences in origin may explain why RCRC and LCRC constitute two distinct clinical entities, which arise through different pathogenetic mechanisms [22,24,25]. (Figure panels D-I: tissue leukocyte ratios LMR, CD4MR and CD8MR, and blood-to-tissue ratios LMR b /LMR, LMR b /CD4MR and LMR b /CD8MR, in tumours and peritumours of right-sided (n = 34) and left-sided (n = 20) CRC; a P < 0.05, b P < 0.01, unpaired Mann-Whitney U test, data are mean ± standard deviation.) Thus, differential aspects such as incidence, presentation, microbiome composition, genetic burden, or immunogenicity could be explained on these grounds [26][27][28][29][30][31]. In a large study with more than 17,000 CRC patients, Benedix et al [32] showed that RCRC represents a more distinct tumour entity than LCRC, mainly because of its higher incidence in women and older people, poor differentiation, locally advanced carcinomas, a distinct pattern of metastatic spread, and worse outcome. Likewise, survival after surgical intervention to remove the tumour should constitute a prominent feature to differentiate both pathologies. In this line, controversial results arise throughout the literature. Some studies support RCRC patients having poorer overall and disease-free survival rates [8], whilst others call attention to the stage of the disease, with better rates for RCRC being limited to stage II and better rates for LCRC being limited to stage III [33]. In our cohort, perhaps due to the stage heterogeneity of the patients, both OS and RFS were found side-dependent, with better outcomes in RCRC patients, reinforcing the idea that prognostic markers for the two pathologies should be studied separately. A number of studies have stressed the importance of the systemic inflammatory response in CRC development and the search for variables involving its components as a valuable tool to drive prognosis [15,34].
Important prognostic records have been obtained in several research works [16,35], which support the use of blood leukocyte ratios as predictors of CRC progression after surgery. However, some studies have highlighted inherent limitations of these analyses. Thus, Zhang et al [36] warn that the use of different factors to adjust for possible confounders across studies puts multivariate hazard ratio estimates at risk of bias and heterogeneity, which in turn can make LMR fail to reach significance for survival. Likewise, sample size, race heterogeneity, and above all the pre/postoperative dynamic changes in circulating leukocyte populations can dramatically affect the observable effects of these variables in multivariate models of survival progression [37]. In our correlative analyses, though all preoperative blood leukocyte ratios rose significantly at different stages, in the end we were unable to establish a predictive value for any of them, for either RCRC or LCRC survival, perhaps due to a combination of the handicaps discussed above. Nonetheless, we do not discard the possibility that they may emerge as good predictors should those handicaps be resolved, thus improving the multivariate analyses. Notably, we report tissue leukocyte ratios, both alone and combined with preoperative blood LMR b , as six variables with a strong predictor value for RCRC overall survival (CD8 pt , CD4CD8 pt , LMR pt , CD8MR pt , LMR b /LMR pt , LMR b /CD8MR pt ), three variables for recurrence-free survival (LMR pt , LMR b /LMR pt , LMR b /CD8MR pt ), and another robust variable to predict LCRC overall survival (LMR b /CD4MR pt ). In addition, to avoid conflicts when interpreting the different survival predictors of RCRC, physician-friendly nomograms are proposed for both OS and RFS. Although much effort has been made in describing and associating the leukocyte content of tumour tissues with CRC survival [38], most studies have been performed on disaggregated tumour and peritumour samples, and only a few of them have attempted to measure leukocyte expression in fixed samples of these tissues to associate it with circulating ratios [19] or to correlate it with patient survival [18,39].
Figure 4 Receiver operating curve analyses for overall survival and Kaplan-Meier curves for optimal cutoff values in right-sided colorectal cancer patients for significant predictors. A-B: CD8 + -lymphocyte (CD8) peritumour (pt), worse below 8.25; C-D: CD4 + plus CD8 + -lymphocyte (CD4CD8) pt worse below 10.16; E-F: Lymphocyte-to-monocyte ratio (LMR) pt worse below 3.185; G-H: CD8 + -lymphocyte-to-monocyte ratio (CD8MR) pt worse below 1.65; I-J: LMR b /LMR pt worse above 0.985; K-L: LMR blood (b) /CD8 + -lymphocyte-to-monocyte ratio pt worse above 1.485; survival proportions at 26 mo after surgery (median follow-up) are shown ( a P < 0.05, b P < 0.01, log-rank test).
Figure 5 Receiver operating curve analysis for overall survival (A) and Kaplan-Meier curve (B) for optimal cutoff value in left-sided colorectal cancer patients for the significant predictor blood lymphocyte-to-monocyte ratio/peritumour CD4 + -lymphocyte-to-monocyte ratio. Worse above 10.57; survival proportions at 26 mo after surgery (median follow-up) are shown ( a P < 0.05, log-rank test). b: Blood; CD4MR: CD4 + -lymphocyte-to-monocyte ratio; LMR: Lymphocyte-to-monocyte ratio; pt: Peritumour.
Hence, this could be the first study in which leukocyte measures in both blood and fixed tissues are put together into predictor indexes for CRC survival. It is worth noting that, in addition to the well-established predictor value of blood leukocyte ratios, the 10 indexes involve leukocyte concentrations in peritumoural zones of the bowel but not in the tumour mass. A peritumour constitutes an easily obtainable tissue during a preoperative exploration of the patient (such as the colonoscopy), which might be safely biopsied without affecting the tumour environment in an adenoma-like surgical extraction protocol. Therefore, on a routine basis, surgeons might access both preoperative peripheral blood parameters as well as non-neoplastic peritumoural tissue (without disturbing the tumour itself) and make use of the described ratios and nomograms to predict the patient's outcome after surgery. Thus, ad hoc surgical strategies can be designed to allow physicians to continue with surgery as programmed or delay the intervention until better scores are achieved after personalised treatments to correct the leukocyte levels in the patient. Altogether, these indexes could be implemented in the first line of prognosis, making it easier to predict the outcome of patients after surgery depending on the tumour location and leukocyte distribution in both peripheral blood and biopsies of the peritumoural region. Limitations Our study is mainly limited by the cohort size. It might be expected that extending these variables to a larger cohort would reinforce our conclusions or even bring previously unobserved interactions to the surface.
Figure 7 Nomograms for predicting overall survival and recurrence-free survival after surgical intervention of right-sided colorectal cancer patients. A: The 4-year probability of overall survival was estimated by summing the scores of peritumour (pt) lymphocyte-to-monocyte ratio (LMR), CD8 + -lymphocyte (CD8)-to-monocyte ratio (CD8MR) pt , CD4 + plus CD8 + -lymphocyte (CD4CD8) pt , blood (b) LMR/LMR pt , LMR b /CD8MR pt , and CD8 pt ; B: The 4-year probability of recurrence-free survival was estimated by summing the scores of LMR pt , LMR b /LMR pt , and LMR b /CD8 + -lymphocyte-to-monocyte ratio pt . For each graph, locate the patient's values for each variable at one of the extremes of its corresponding axis, taking into account the correct position with respect to the optimal cutoff that is indicated; values higher than the cutoff go to the upper end and values lower than the cutoff go to the lower end. Then, draw a line straight upwards to the "Points" axis to determine the score associated with each variable. Add up all the scores, locate this sum on the "Total points" axis and draw a line straight down to the lowest axes of "4-year overall survival" or "4-year recurrence-free survival" to find the predicted probability of overall survival or recurrence-free survival for the patient, respectively.
CONCLUSION Herein we present important remarks on the value of combining circulating leukocyte ratios and tissue-infiltrated leukocyte ratios to sustain valuable prognostic tools for physicians in order to stratify patients regarding their putative outcome. In
the era of personalised medicine, such indexes will provide benefits by improving both resource use and the well-being of CRC patients after surgery. Research background Colorectal cancer (CRC) accounts for 9.4% of cancer deaths worldwide, ranking second after lung cancer. Despite the wide variety of factors tested to predict their outcome, most patients with similar variables show big differences in survival. Moreover, right-sided CRC (RCRC) and left-sided CRC (LCRC) patients exhibit large differences in outcome after surgical intervention as assessed by preoperative blood leukocyte ratios [today, the most widely used parameters to assess a patient's overall survival (OS) and recurrence-free survival (RFS) after surgery]. However, few efforts have been made to link tumour-infiltrated leukocyte ratios to patient outcomes. Research motivation To determine whether both RCRC and LCRC patient outcomes could be accurately predicted based on the counting of infiltrated leukocytes in tumour and peritumoural tissues. Research objectives The aim of this study was to find stronger indexes than circulating (blood) leukocyte ratios to predict RCRC and LCRC patient outcomes. Research methods A prospective study was performed with CRC patients who had undergone surgical intervention to resect the tumours. Leukocyte concentrations in peripheral blood, tumour, and non-neoplastic peritumoural tissues were determined. Ratios of these parameters were evaluated as predictors for OS and RFS using Spearman correlations and Cox univariate and multivariate proportional hazards regression, followed by the calculation of the receiver-operating characteristic and area under the curve (AUC) and the determination of Youden's optimal cutoff values for those variables that significantly correlated with either RCRC or LCRC patient outcomes. Clinician-friendly nomograms were developed to predict OS and RFS from the prediction indexes. The accuracy of the model was evaluated using calibration and validation analyses.
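As a usage illustration only, the sketch below checks a hypothetical RCRC patient's indexes against the optimal cutoffs reported in the abstract (for example, LMR pt < 3.185 or LMR b /LMR pt > 0.985 indicating worse outcome) and lists which indexes fall on the unfavourable side. It is a schematic reading of the published cutoffs, not the nomogram scoring and not a validated clinical tool.

```python
# Cutoffs for RCRC taken from the abstract; "below"/"above" is the direction
# associated with worse outcome.
OS_CUTOFFS = {
    "CD8pt": (8.250, "below"), "CD4CD8pt": (10.160, "below"),
    "LMRpt": (3.185, "below"), "CD8MRpt": (1.650, "below"),
    "LMRb/LMRpt": (0.985, "above"), "LMRb/CD8MRpt": (1.485, "above"),
}
RFS_CUTOFFS = {
    "LMRpt": (3.185, "below"),
    "LMRb/LMRpt": (0.985, "above"), "LMRb/CD8MRpt": (1.485, "above"),
}

def risk_flags(indexes, cutoffs):
    """Return the indexes whose values fall on the 'worse outcome' side."""
    flags = []
    for name, (cut, worse) in cutoffs.items():
        value = indexes.get(name)
        if value is None:
            continue
        if (worse == "below" and value < cut) or (worse == "above" and value > cut):
            flags.append(name)
    return flags

# Hypothetical preoperative values for one RCRC patient.
patient = {"CD8pt": 6.0, "CD4CD8pt": 12.0, "LMRpt": 2.9,
           "CD8MRpt": 1.8, "LMRb/LMRpt": 1.2, "LMRb/CD8MRpt": 1.3}
print("OS risk flags:", risk_flags(patient, OS_CUTOFFS))
print("RFS risk flags:", risk_flags(patient, RFS_CUTOFFS))
```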
Genetic programming for hydrological applications: to model or to forecast, that is the question Genetic programming (GP) is a widely used machine learning (ML) algorithm that has been applied in water resources science and engineering since its conception in the early 1990s. However, similar to other ML applications, the GP algorithm is often used as a data fitting tool rather than as a model building instrument. We find this a gross underutilization of the GP capabilities. The most distinctive feature of GP, which sets it apart from the rest of the ML techniques, is its capability to produce explicit mathematical relationships between input and output variables. This capability fits naturally within theory-guided data science (TGDS), which recently emerged as a new paradigm in ML with the main goal of blending the existing body of knowledge with ML techniques to induce physically sound models. TGDS has since evolved into a popular data science paradigm, especially in scientific disciplines including water resources. Following these ideas, in our prior work, we developed two hydrologically informed rainfall-runoff model induction toolkits, for lumped modelling and distributed modelling, based on GP. In the current work, the two toolkits are applied using a different hydrological model building library. Here, the model building blocks are derived from the Sugawara TANK model template, which represents the elements of hydrological knowledge. Results are compared against the traditional GP approach and suggest that GP as a rainfall-runoff model induction toolkit preserves the prediction power of the traditional GP short-term forecasting approach while helping to better understand catchment runoff dynamics through the readily interpretable induced models. INTRODUCTION Data science methods have shown success in many scientific fields including hydrology. However, it may be argued that, compared with the level of success in commercial fields, their potential in water resources has not been fully realized. There are two major reasons for this: a lack of labelled instances for model training and the black-box nature of data-driven models, where a modeller has little or no knowledge about how the model makes its prediction (Karpatne et al. ). Although data-driven models are often more accurate than more traditional hydrological physics-based, conceptual and empirical models in terms of predictive capabilities, they contribute little towards the advancement of scientific theories due to the lack of interpretability of the model configurations. Recently, a novel modelling paradigm called theory-guided data science (TGDS) (Karpatne et al. ) has emerged, with the main goal of blending the existing body of knowledge with ML techniques to induce physically sound models. Genetic programming (GP) (Koza ) is an ML algorithm that has been used for many applications in water resources science and engineering since its invention in the early 1990s. However, in most state-of-the-art GP applications, the algorithm is used as a data fitting tool instead of as a model building instrument. The data fitting makes the GP applications very similar to other ML algorithms, such as artificial neural networks or support vector machines. In hydrological rainfall-runoff modelling, the most frequent use of ML, including GP, is as a short-term forecasting tool. We find this to be an underutilization of the GP capabilities.
The most unique and distinct feature of GP that makes it so different from the rest of ML techniques lies in its capability to produce explicit mathematical relationships between input and output variables. A recent paper (Addor & Melsen ), based on more than 1,500 peer-reviewed research articles, concluded that the model selection in hydrological modelling is more often driven by legacy rather than adequacy. Furthermore, the so-called uniqueness of the place has been identified as one of the key aspects of hydrological modelling (Beven ). Hence, an automatic model induction and model selection framework may serve as an alternative to the more traditional subjective model selection and should induce a model architecture that might be more adequate for the intended application. HYDROLOGICAL MODELLING Hydrological models play an important role in understanding catchment dynamics. In this section, different hydrological model-building strategies, some of which have been used in the present study are briefly discussed. Theory-based models vs. data science models Here, we refer to both conceptual and physics-based hydrological models as theory-based models. In conceptual modelling, hydrological processes are represented mathematically with the reservoir units describing the catchment storages. Fluxes, reservoirs, closure relations, and transfer functions are the main building blocks of a conceptual model. Conceptual models are several orders less complex than the physics-based models due to the conceptual representation of catchment dynamics instead of small-scale physics used in physics-based models. However, in conceptual models, there are little or no direct relations between the model parameters and physically measurable quantities in the field. Therefore, it is required to use calibration schemes to select appropriate parameter values which provide a reasonable match between observed and modelsimulated values. Two types of modelling approaches can be identified within the conceptual modelling: the models based on a single hypothesis (fixed models) and the models based on multiple hypotheses (flexible models). Fixed models are built around a general model architecture that gives satisfactory model performances over a fairly broad range of watersheds and climatic regions. On the other hand, flexible modelling frameworks provide model building blocks that can be arranged in different ways to test many hypotheses about catchment dynamics instead of the one fixed hypothesis in fixed models. Such robust nature of any modular modelling framework allows the modeller to consider the uniqueness of the area of their application. Data science models, such as ML, gained popularity in many disciplines including hydrology with the advancement of computing power and data availability. The main advantage of data science models is their ability to use the available data to build the input-output relationships which provide actionable models with good predictive capabilities without relying on scientific theories. The short lead time predictive power of ML models is often superior to that of traditional physics-based and conceptual hydrological models due to their ability to capture the non-linearity, non-stationarity, noise complexity and dynamism of data (Yaseen et al. ). Another major advantage of an ML model is that it requires less effort to develop and calibrate than a physics-based model. 
Data science models, such as deep learning (DL) models, have shown more accurate performances in hydrograph predictions than the traditional approaches in ungauged catchments (Kratzert et al. ). Furthermore, ML models can capture watershed similarities by providing satisfactory results even for the catchments that were not used for the training of those models. This implies the potential of ML models in evolving catchment scale theories in which the traditional models were unable to do so well (Nearing et al. ). Theory-based models are frequently admired by the community due to their interpretability which may lead to a better understanding of catchment dynamics. The simple application of data-driven techniques may produce models with high prediction accuracy yet with meaningless model configurations which do not satisfy basic hydrological understanding and may have serious complications with the explanation. The black-box nature of data science models has been criticized, leading to a few of their applications and hindered success in many scientific fields. Beven () highlights the importance of the interpretability of DL models and suggests a more direct incorporation of process information into such models. Furthermore, he points out that ML models should also need to pay attention to similar issues associated with traditional modelling approaches like data and parameter uncertainties and equifinality. Nearing et al. () contend that there is a danger for the hydrologic community in not recognizing the potential of ML offers for the future of hydrological modelling. Furthermore, the authors reject the most common criticism on ML models (the lack of explainability) by stating that the adequacy of process representation in theory-based models can be questioned due to the poor prediction accuracy. In summary, despite having a huge potential, the modern ML capabilities have not been thoroughly tested in hydrological modelling where there is an expectation that even distributed hydrological models are to be developed principally on ML in near future. Even though the new data science techniques, such as DL, have become indispensable tools in many disciplines, their applications in water resources remain quite limited (Shen et al. ). Lumped models vs. distributed models vs. semi-distributed models Hydrological models can be broadly categorized into lumped and distributed models based on whether a model considers the spatial heterogeneity of watershed properties (e.g. geology, topography, land use) and climate variables (e.g. rainfall distribution). Lumped models assume spatially uniform catchment characteristics and use catchment average climate variable values (e.g. precipitation, temperature, evaporation) as model inputs. In contrast, distributed models incorporate the spatial variability in watershed properties and climate variables into the modelling process. Model parsimony due to relatively few model parameters, ease and simplicity of use have made lumped modelling a popular hydrological modelling approach. However, the meaningfulness of lumped values may deteriorate, especially when the catchment size increases, and hence, the lumped representation may not be correctly corresponding to the actual physical reality of the catchment. A lumped model may still produce accurate results for a large catchment where the spatial heterogeneities are significant, but the inferences made through the model may not be reasonable or realistic. 
The so-called uniqueness of the place has been identified as one of the crucial factors in hydrological modelling (Beven ). One way of addressing this phenomenon is to use distributed models, as they consider spatial variability in the modelling process. In addition, distributed models are required in situations where discharge forecasts are needed at multiple points within the catchment, where the impacts of changing land use patterns within the watershed are to be tested, or where critical source areas must be identified for contaminant or sediment transport modelling (Fenicia et al. ). Two subcategories can be identified within distributed modelling: fully distributed and semi-distributed models. Fully distributed models discretize the watershed into regular or irregular grids and use small-scale physics to model the fluxes through the spatial elements. In the beginning, their usage was greatly limited by computational demand and intensive data requirements. However, researchers believed that more data about catchment properties and climate variables would become available with the advancement of technology and invested heavily in developing fully distributed models. Until today, however, distributed models have not reached the expected outcome. GP APPLICATIONS IN WATER RESOURCES ENGINEERING Over the past three decades, ML has evolved into an irreplaceable tool in many commercial and scientific disciplines, including water resources engineering. The approach outperforms traditional approaches in many applications, such as autonomous driving, language translation, character recognition and object tracking (Karpatne et al. ). ML techniques have received considerable attention among hydrologists during the last 20 years, and the majority of ML methods used in hydrological modelling can be grouped into a few broad categories. The earliest GP application in water resources engineering was reported in the late 1990s (Babovic ). Since then, GP has been used in many research directions in water resources engineering, such as rainfall-runoff modelling. Karpatne et al. suggest five ways of combining scientific knowledge with data science: (i) theory-guided design of data science models, (ii) theory-guided learning of data science models, (iii) theory-guided refinement of data science outputs, (iv) learning hybrid models of theory and data science and (v) augmenting theory-based models using data science. A typical theory-guided machine learning approach may follow one or more of the above-named strategies to couple scientific knowledge with learning algorithms. GP FOR HYDROLOGICAL FORECASTING In the traditional setting, when GP is used for hydrological forecasting, the terminal set of the GP algorithm consists of past and current states of meteorological variables and randomly generated constants. The function set consists of basic mathematical functions. The GP algorithm uses these building blocks to generate its solution set for predicting catchment response into the future. Runoff forecasting For example, to produce one-day (Q t+1) and five-day (Q t+5) runoff forecasts, one would use precipitation (P t), evaporation (E t) and antecedent runoff (Q t) as the input variables. Furthermore, lagged precipitation (P t−lag) and evaporation (E t−lag) vectors up to a certain period (lag) may also be used to incorporate precipitation and evaporation history into the forecasting process (in the current study, lag = 5 days); a sketch of this input construction is given below.
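The sketch below builds the lagged input matrix just described for a chosen lead time. It is a generic illustration rather than the toolkit used in the paper: the variable names are hypothetical, and the optional gplearn call shows only one openly available symbolic-regression engine with settings chosen for illustration.

```python
import numpy as np

def lagged_inputs(P, E, Q, lag=5, lead=1):
    """Terminal-set inputs P_t..P_{t-lag}, E_t..E_{t-lag}, Q_t mapped to the target Q_{t+lead}."""
    X, y = [], []
    for t in range(lag, len(Q) - lead):
        X.append(np.concatenate([P[t - lag:t + 1][::-1],   # P_t, P_{t-1}, ..., P_{t-lag}
                                 E[t - lag:t + 1][::-1],   # E_t, E_{t-1}, ..., E_{t-lag}
                                 [Q[t]]]))                 # last observed runoff Q_t
        y.append(Q[t + lead])
    return np.array(X), np.array(y)

# X1, y1 = lagged_inputs(P, E, Q, lag=5, lead=1)     # one-day-ahead setting
# X5, y5 = lagged_inputs(P, E, Q, lag=5, lead=5)     # five-day-ahead setting
#
# Any GP engine can then evolve an explicit forecast equation from these inputs; with the
# open-source gplearn package (an assumption, not the software used in the study):
# from gplearn.genetic import SymbolicRegressor
# gp = SymbolicRegressor(population_size=500, generations=30,
#                        function_set=('add', 'sub', 'mul', 'div', 'sqrt', 'log'),
#                        parsimony_coefficient=0.001, random_state=0)
# gp.fit(X1, y1); print(gp._program)                 # the evolved forecast equation
```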
Although the runoff history can also be included in the forecasting process, in this study no lagged runoff vectors are used, as the forecasted runoff is largely correlated with the last observed runoff value (Q t). The expected relationship between the forecasted runoff and the input variables can be captured as in Equation (1). GP may create its candidate equations using one or more input vectors along with the random constants and mathematical functions. GP settings used for the runoff forecasts are given in Table 1. Case studies The catchment characteristics of both watersheds are summarized in Table 2. Precipitation (P), streamflow (Q) and evaporation (E) data for 11 years, beginning on 1 January 2004, were used. Model performance was evaluated with four objective functions (in their definitions, σ is the standard deviation, μ is the mean, Q ot is the mean of observed discharge values and ln is the natural logarithm); they are responsive to contrasting flow segments of simulated and observed hydrographs. Results Four equations were derived by using GP as a forecasting tool for one-day and five-day runoff prediction of the two catchments. Equations (6) and (7) are for the Sipsey Fork catchment and Equations (8) and (9) are for the Red Creek catchment. The performance metrics of the four equations are given in Table 3. Figure 3 shows the forecasted hydrographs together with the observed hydrographs of the catchments. • One-day runoff forecasts of the Red Creek catchment achieve high efficiency values for both the training and testing periods under all four objective functions. This demonstrates the ability of GP to achieve high efficiency values in runoff forecasting. Furthermore, the optimal equation (Equation (8)) shows no signs of overfitting due to its consistent performance across both the training and testing periods. • As can be seen from the observed hydrographs, the runoff signature of the Sipsey Fork catchment is relatively more complex than that of the Red Creek catchment. In this context, GP with only mathematical functions is unable to achieve satisfactory performance in terms of the training NSE value. Additionally, the optimal equation (Equation (6)) is overfitted to its training data, as the NSE value falls below zero for the testing period. This highlights the importance of cautious usage of traditional GP forecasts when they are applied to out-of-sample values. • As per the efficiency values, one-day forecasts of both catchments perform significantly better than five-day forecasts. Therefore, the performance of GP as a forecasting tool deteriorates significantly as the lead time increases. • The optimal equations provide little or no knowledge about the underlying physical phenomena of runoff generation. Furthermore, the equations tend to become lengthier with insignificant components. GP FOR HYDROLOGICAL MODELLING In this section, GP is used for automatic rainfall-runoff model induction. In contrast to runoff forecasting, GP here maps the input forcing variables at time t to the runoff at time t, which involves no lead time. RB function The RB function represents a single reservoir in a TANK model configuration. As shown in Equation (10), RB has nine function arguments: Q = RB(RI, h1, a1, h2, a2, b, L1, L2, S), where Q denotes the discharge components (two side flows and one bottom flow) of the reservoir (mm/day) and RI denotes all inputs to the reservoir; the remaining arguments describe the two side-outlet heights and coefficients (h1, a1, h2, a2), the bottom-outlet coefficient (b), the lag parameters of the side flows (L1, L2) and the reservoir storage (S). A hedged sketch of such a reservoir unit is given below. RJOIN function The RJOIN function (Equation (11)) combines reservoir units into larger model configurations; the GP settings used for model induction are given in Table 5.
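Because the exact outlet equations of RB are not reproduced in the text, the sketch below assumes the standard TANK formulation (threshold-controlled side outlets plus a linear bottom outlet). Parameter names follow the argument list of Equation (10); the lag routing via L1 and L2 is omitted for brevity, so this is an illustrative approximation rather than the paper's implementation.

```python
def rb_step(RI, S, h1, a1, h2, a2, b):
    """One time step of a TANK-style reservoir, in the spirit of the RB building block.

    RI : total inflow to the reservoir during the step (mm)
    S  : storage at the start of the step (mm)
    h1, h2 : side-outlet heights (mm); a1, a2 : side-outlet coefficients (1/day)
    b  : bottom-outlet coefficient (1/day)
    Returns the two side flows, the bottom flow (mm/day) and the updated storage;
    in the full framework the side flows would additionally pass through the lag
    functions parameterized by L1 and L2 before contributing to discharge.
    """
    S = S + RI
    q1 = a1 * max(S - h1, 0.0)      # side flow once storage exceeds the lower outlet
    q2 = a2 * max(S - h2, 0.0)      # side flow once storage exceeds the upper outlet
    qb = b * S                       # bottom flow feeding the reservoir below
    total = q1 + q2 + qb
    if total > S:                    # a reservoir cannot release more than it stores
        q1, q2, qb = (q * S / total for q in (q1, q2, qb))
        total = S
    return q1, q2, qb, S - total
```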
Optimal model selection was carried out as described in earlier sections, while 5,000 behavioural models (parameter sets) were identified for uncertainty analysis using NSE as the performance indicator with a threshold value of 0.6. Results Lumped modelling The performance metrics of the optimal model are given in Table 6. As per the efficiency values, the optimal model shows a consistent performance over the calibration, validation and testing periods. Hence, it shows no sign of overfitting to its training data. The simulated hydrograph along with the observed hydrograph of the Sipsey Fork catchment is given in Figure 9, and a good visual match between the two hydrographs can be observed. However, on some occasions the optimal model underestimates high discharge values. The simulated FDC is shown in Figure 10. The calibrated values of the optimal model reveal that almost the entire surface runoff consists of three surface runoff components: q sur2 of RB 1, and q sur1 and q sur2 of RB 7. The associated time delays of their lag functions are 0.04, 0.05 and 0.01 days, respectively. This suggests that the optimal model has a quick discharge response to its forcing terms. Interestingly, experimental insights from the Sipsey Fork catchment indicate that the most distinctive feature of the catchment is its rapid flow response to precipitation events. FORECASTING VS. MODELLING The most common application of GP in rainfall-runoff modelling so far is based on the use of the algorithm as a short-term forecasting tool. As shown in this and many previous studies, GP is capable of achieving high efficiency values when it is used as a forecasting tool. However, in this study, we explore the potential of GP as a model induction toolkit. In contrast to the traditional use of GP as a forecasting tool, the two model induction toolkits (ML-RR-MI and MIKA-SHA) include the following augmentations when they are used as model induction engines. • GP is used to simulate the runoff at time t using the forcing terms at time t, which involves no lead time. The precipitation, evaporation and streamflow histories are automatically taken into account by the reservoir storage and lag functions. • Existing hydrological knowledge is incorporated as special functions (RB, RJOIN and DISTRIBUTED) into the GP function set. The objective of adding special functions is to make the induced models readily interpretable and to capture the complex runoff dynamics. • Input variable data are divided into three segments (calibration, validation and testing) and the performance over the validation period is also considered in the optimal model selection stage. This is used to avoid the selection of overfitted models as the optimal model. • A multi-objective optimization scheme is used to optimize both model structure and parameters simultaneously. Models selected using single-objective optimization may be biased towards a specific flow segment; for example, the popular NSE is sensitive to high flows. Hence, a model derived through multi-objective optimization is expected to perform well across many flow characteristics. • Optimal model selection is based not only on the best training fitness but also on validation fitness, relative performance in capturing the observed FDC, model parsimony, time-series complexity and pattern matching. • Induced models are readily interpretable and provide insights about catchment dynamics. • Parallel computation is used at the fitness calculation stage.
As the special functions require considerably longer computational times than the basic mathematical functions, the use of parallel computation greatly helps to reduce the overall computational time. As can be seen for the Red Creek catchment, GP as a model induction toolkit can achieve high efficiency values similar to those of the runoff forecasts (one-day forecasts). More importantly, when the runoff signature is much more complex, as with the Sipsey Fork catchment, the model induction toolkit was able to achieve a satisfactory predictive power that runoff forecasts through traditional GP were unable to reach. In the present study, a relatively short period (6 years) was used for model training (calibration). However, both model induction toolkits were able to capture the runoff signature reasonably well within that period. Obtaining a long, high-quality dataset for model training is often challenging in hydrological modelling. Therefore, it may be safe to assume that GP as a model induction toolkit performs better than GP forecasting when the discharge signatures are more complex and difficult to model and/or when the labelled training instances are limited. CONCLUSIONS We recognize the potential offered by ML algorithms for hydrological modelling. We also recognize that simplistic black-box data-driven models may lead to the evolution of accurate yet physically senseless models with serious difficulties of interpretation, which may not serve the advancement of hydrological knowledge. Therefore, we chart the most promising way ahead through the integration of existing hydrological knowledge with learning algorithms to induce more generalizable and physically coherent models. This was the motivation behind the development of GP-based model induction frameworks founded on both ML and theory-driven models. We therefore expect this work to strengthen the link between two major, historically quite separate, communities in water resource science and engineering: those working with physics-based process simulation modelling and those working with machine learning. The results demonstrated in this contribution show the potential of GP as a model induction toolkit, in contrast to its most frequent usage as a short-term forecasting tool. More importantly, GP as a model induction engine preserves the high prediction accuracies shown with runoff forecasting while helping hydrologists gain a better understanding of watershed dynamics through the resulting interpretable models. Furthermore, GP as a model induction toolkit produces more accurate results than forecasting when the discharge signatures are more complex and difficult to model and/or when the labelled training instances are limited. Furthermore, in the current study, GP is used to its full capacity as a model induction engine to simultaneously optimize both model configuration and parameters, so that model selection is driven by adequacy rather than legacy. Therefore, the modular approach of these model induction toolkits may address the so-called uniqueness of the place in hydrological modelling. A unique, and perhaps more important, feature of these model induction toolkits is their capability to couple with any internally coherent collection of building blocks representing the elements of hydrological knowledge. The optimal model configurations derived in this study are in agreement with experimental insights and previously published research findings for the catchments.
Hence, GP as a model induction toolkit is able to mine knowledge from data, which makes it viable to rely on the induced models with more than just statistical confidence. The optimal models derived by the two GP-based model induction toolkits are readily interpretable by domain experts. This makes the approach introduced here different from other ML applications in rainfall-runoff modelling. We need data to build models. In the absence of data, not even human experts will be able to construct reliable models based on their intuition, since even human-built models require some sort of validation against data. With limited data, both human experts and our model induction toolkits (ML-RR-MI and MIKA-SHA) face the same challenge. DL models will not be useful in such a situation due to their very large number of free parameters. From the perspective of the bias-variance dilemma, DL models, such as Long Short-Term Memory networks (LSTM), will not be particularly useful under poor data availability. However, the present work is just the beginning of coupling the strengths of human insight with those of computational discovery techniques. We expect further research studies on theory-guided machine learning to be directed towards knowledge discovery and automatic model building in hydrological modelling. DATA AVAILABILITY STATEMENT All relevant data are included in the paper or its Supplementary Information.
2021-05-21T16:57:13.390Z
2021-04-12T00:00:00.000
{ "year": 2021, "sha1": "7d2d4a8855dcff43bccfe682d43f15c794c765fb", "oa_license": "CCBY", "oa_url": "https://iwaponline.com/jh/article-pdf/23/4/740/910365/jh0230740.pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "1a89346c764b8a23fbd5cf058319da5b1e98ace5", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
73728019
pes2o/s2orc
v3-fos-license
Total Synthesis of (−)-Ambiguine P Described is a concise total synthesis of (-)-ambiguine P, a cycloheptane-containing member of the hapalindole alkaloids. The challenging pentacyclic framework of the natural product was assembled rapidly via a [4 + 3] cycloaddition reaction-inspired strategy, and the tertiary hydroxy group was introduced by an NBS-mediated bromination-nucleophilic substitution sequence. Enol triflate 17. Ketone 16 was prepared according to the procedure reported by Baran and coworkers. 4 The starting material for Baran's procedure, (4S,8R/S)-(−)-p-menth-1-en-9-ol (CAS# 937035-21-7), was prepared by hydroboration/oxidation of (S)-(−)-limonene. 5 A 250 mL round-bottomed flask with a magnetic stir bar was charged with ketone 16 (2.41 g, 13.5 mmol, 1 equiv.) and THF (48 mL). The colorless solution was cooled to −78 °C. KHMDS (1.0 M in THF, 17.6 mL, 17.6 mmol, 1.3 equiv.) was added dropwise, and the resulting yellow mixture was stirred at −78 °C for 1.5 h. Comins' reagent (8.50 g, 21.7 mmol, 1.6 equiv.) was transferred in using a total of 20 mL THF, and stirring was continued for another 1.5 h. The reaction was quenched with 20 mL saturated aqueous NH4Cl, and allowed to warm up ambient temperature. The reaction mixture was partitioned between 200 mL 1:1 brine:H2O and 100 mL 1:1 Et2O:hexanes, and the aqueous layer was extracted with 70 mL 1:1 Et2O:hexanes. The combined organic layers were washed with brine, dried over Na2SO4, and concentrated under reduce pressure. The crude material was purified by flash chromatography (0:100 → 1:99 EtOAc:hexanes) to afford enol triflate 17 (3.74 g, 12.1 mmol, 89% yield) as a pale yellow oil. Enone 18. (Procedure adapted from Stoltz et al.) 6 A 500 mL round-bottomed flask with a magnetic stir bar was charged with enol triflate 17 (3.66 g, 11.8 mmol, 1 equiv.) and DMF (18 mL). CuI (225 mg, 1.18 mmol, 0.1 equiv.), Pd(dppf)Cl2•DCM (963 mg, 1.18 mmol, 0.1 equiv.), and LiCl (flame-dried before use; 2.65 g, 62.5 mmol, 5.3 equiv.) were added sequentially. Another 100 mL DMF was cannulated in, and tributyl(1-ethoxyvinyl)tin (5.98 mL, 17.7 mmol, 1.5 equiv.) was added. The mixture was frozen in a liquid N2 bath and degassed by three freeze-pump-thaw cycles. The reaction mixture was then placed in a pre-heated 40 °C oil bath. After 14 h, the reaction mixture was cooled to ambient temperature and diluted with 200 mL H2O. The mixture was then partitioned between 30 mL H2O and 100 mL 1:1 Et2O:hexanes, and the aqueous layer was extracted with 1:1 Et2O:hexanes (100 mL  2). The combined organic layers were washed with 1 N HCl ( 2), H2O ( 2), and brine ( 2), dried over Na2SO4, and concentrated under reduced pressure. The crude ethyl enol ether was dissolved in 150 mL DCM, and 75 mL 2 N HCl was added. The reaction mixture was stirred vigorously at ambient temperature for 1.75 h. The mixture was then partitioned between 50 mL H2O and 60 mL DCM, and the aqueous layer was extracted with 70 mL DCM. The combined organic layer was washed with saturated aqueous NaHCO3, dried over Na2SO4, and concentrated under reduced pressure. The resulting crude material was filtered through a plug of silica to remove residual solids using 25:75 Et2O:hexanes as the eluent, and then concentrated under reduced pressure. The crude mixture was purified by flash chromatography ( (2.08 g, 10.2 mmol, 1 equiv.) was added slowly over 25 min using a total of 15 mL THF. The reaction mixture was stirred at −78 °C for 1.5 h, and diluted with 150 mL hexanes. 
The reaction mixture was allowed to warm to ambient temperature, and filtered through a pad of Celite using 20:80 Et2O:hexanes as the eluent. The mixture was concentrated under reduced pressure to provide the crude TBS enol ether 19 as a yellow oil, which was carried forward without further purification. Tricycle 23. TBS enol ether 19 obtained in the previous step was divided evenly into three portions, and three identical reactions were performed. After all the reactions were quenched, they were combined, and subjected to further work-up and purification. Each of the three 50 mL recovery flasks with a magnetic stir bar was charged with alcohol 11 (307 mg, 1.75 mmol, 1 equiv.) and DCM (5.5 mL). TBS enol ether 19 (3.40 mmol, 1.94 equiv.) was added as a solution in 12 mL DCM. The resulting yellow solution was cooled to −78 °C. TMSOTf (distilled over P2O5 before use; 0.332 mL, 1.84 mmol, 1.05 equiv.) was added dropwise, and the mixture turned deep red. After 35 min, the reaction was quenched with 10 mL saturated aqueous NaHCO3, and allowed to warm to ambient temperature. The three reaction mixtures were combined and partitioned between 50 mL 1:1 saturated aqueous NaHCO3:H2O and 30 mL DCM. The aqueous layer was extracted with 15 mL DCM, and the combined organic layers were dried over Na2SO4 and concentrated under reduced pressure. Purification by flash chromatography (6:94 → 7:93 EtOAc:hexanes) provided tricycle 23 (1.075 g, 2.97 mmol, 57% yield) as a yellow gel. Enone 18, which was generated from hydrolysis of unreacted 19, was recovered and reused. Enone 26. A 100 mL recovery flask with a magnetic stir bar was charged with pentacycle 24 (390 mg, 1.08 mmol, 1 equiv.) and THF (18.9 mL) to afford a yellow solution. H2O (2.1 mL) was added, and the yellow solution was cooled to 0 °C. DDQ (1.71 g, 7.54 mmol, 7 equiv.) was added in two portions, and the mixture turned deep red. After 5 min, the ice/water bath was removed, and the deep red solution was stirred at ambient temperature for 2 h. The reaction mixture was quenched with 50 mL 2.5 N NaOH. The mixture was partitioned between 50 mL 1:1 2.5 N NaOH:H2O and 60 mL EtOAc, and the aqueous layer was extracted with EtOAc. [α]D23.6 = +63.4 (c = 0.222, CHCl3). N-Boc Diene 29. A 25 mL recovery flask with a magnetic stir bar was charged with allylic alcohol 28 (77.7 mg, 0.168 mmol, 1 equiv.) and DCM (6.7 mL). The resulting colorless solution was cooled to 0 °C. Without weighing, Martin sulfurane (ca. 200 mg, 0.29 mmol, 1.7 equiv.) was added, and the mixture turned yellow. After 40 min of stirring at 0 °C, the reaction mixture was diluted with 6 mL saturated aqueous NaHCO3, and allowed to warm to ambient temperature. The mixture was then partitioned between 20 mL 1:1 saturated aqueous NaHCO3:H2O and 20 mL DCM, and the aqueous layer was extracted with 15 mL DCM. The combined organic layers were dried over Na2SO4 and concentrated under reduced pressure. The crude material was purified by flash chromatography (3:97 → 4:96 EtOAc:hexanes) to afford N-Boc diene 29 (72.1 mg) as a white solid, containing an impurity that was removed after the next step. Specific details for structure refinement (Table 1): All atoms were refined with anisotropic thermal parameters. Hydrogen atoms were included in idealized positions for structure factor calculations except those of N-H groups, which were located in the difference Fourier map and freely refined without any restraints. All structures are drawn with thermal ellipsoids at 50% probability.
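The mmol and equivalents figures quoted throughout these procedures follow from n = m/M relative to the limiting reagent. The short sketch below reproduces that bookkeeping for the enol triflate 17 step; the molar mass of ketone 16 is back-calculated from the reported mass/mmol pair and the helper names are hypothetical, so this is only an illustration of the arithmetic, not data from the paper.

```python
def mmol(mass_g, molar_mass_g_per_mol):
    """Amount of substance in mmol from a mass in grams and a molar mass in g/mol."""
    return 1000.0 * mass_g / molar_mass_g_per_mol

def equivalents(n_mmol, limiting_mmol):
    """Molar equivalents relative to the limiting reagent."""
    return n_mmol / limiting_mmol

ketone_16 = mmol(2.41, 178.5)   # ~13.5 mmol, 1 equiv. (limiting reagent; MW back-calculated)
khmds = 1.0 * 17.6              # 17.6 mL of a 1.0 M solution -> 17.6 mmol
print(round(ketone_16, 1), round(equivalents(khmds, ketone_16), 2))  # 13.5 mmol, ~1.30 equiv.
```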
2019-03-12T13:02:39.178Z
2019-03-11T00:00:00.000
{ "year": 2019, "sha1": "f1da459344fb522ad0d675b398b040f84e847cbc", "oa_license": "CCBYNC", "oa_url": "https://figshare.com/articles/Total_Synthesis_of_-Ambiguine_P/7841108/files/14601941.pdf", "oa_status": "GREEN", "pdf_src": "Adhoc", "pdf_hash": "9b6e92d22ba6ac74b569cfa8c2c3b1d14e43bb15", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
231655225
pes2o/s2orc
v3-fos-license
Development of a loop-mediated isothermal amplification (LAMP) assay for the identification of the invasive wood borer Aromia bungii (Coleoptera: Cerambycidae) from frass The red-necked longhorn beetle Aromia bungii (Faldermann, 1835) (Coleoptera: Cerambycidae) is native to east Asia, where it is a major pest of cultivated and ornamental species of the genus Prunus. Morphological or molecular discrimination of adults or larval specimens is required to identify this invasive wood borer. However, recovering larval stages of the pest from trunks and branches causes extensive damage to plants and is timewasting. An alternative approach consists in applying non-invasive molecular diagnostic tools to biological traces (i.e., fecal pellets, frass). In this way, infestations in host plants can be detected without destructive methods. This paper presents a protocol based on both real-time and visual loop-mediated isothermal amplification (LAMP), using DNA of A. bungii extracted from fecal particles in larval frass. Laboratory validations demonstrated the robustness of the protocols adopted and their reliability was confirmed performing an inter-lab blind panel. The LAMP assay and the qPCR SYBR Green method using the F3/B3 LAMP external primers were equally sensitive, and both were more sensitive than the conventional PCR (sensitivity > 103 to the same starting matrix). The visual LAMP protocol, due to the relatively easy performance of the method, could be a useful tool to apply in rapid monitoring of A. bungii and in the management of its outbreaks. Introduction Aromia bungii (Faldermann, 1835) (Coleoptera: Cerambycidae), the red-necked longhorn beetle, is an important pest of fruit and ornamental plants of the genus Prunus, both in native areas of east Asia and in newly invaded areas of Europe and Japan (EFSA 2019; EPPO 2020; CABI 2020). A. bungii can infest healthy or weakened host species and complete several overlapping generations in the same tree (Ma et al. 2007). The larvae bore galleries in the trunk and main branches, causing structural weakness, dieback, and finally tree death. Biological parameters of A. bungii evaluated in the Italian population showed remarkable fertility and longevity (Russo et al. 2020). A. bungii is in the list of priority pests in the European Union (EU 2019) and quarantine measures have been applied in Germany and Italy to eradicate this invasive pest (Hörren 2016) or to contain the risk of further outbreaks (Carella 2019). These quarantine measures can have a strong impact on nurseries and farmers. Early detection supported by rapid diagnostic protocols can help to identify the presence of A. bungii on plants irrespectively of the developmental stage of the pest, so that the efficacy of the phytosanitary monitoring in the field and at points of entry is enhanced. In the latter case, possible import delays can be avoided (Blaser et al. 2018;Poland and Rassati 2019). The high specificity and sensitivity of DNA-based technologies allows the detection of harmful organisms even at low concentrations of DNA extracted from plant tissues (Aglietti et al. 2019;Rizzo et al. 2020a). Among the most versatile, sensitive and specific methods, loop-mediated isothermal amplification (LAMP) can be used as a fieldfriendly and cost-effective diagnostic tool (Notomi et al. 2000(Notomi et al. , 2015. Several LAMP tests have been used both in the field and in laboratories, in particular for human and animal diseases (Lucchi et al. 2010), in food safety controls (Abdulmawjood et al. 
2014), as well as in identifying plant pathogens (Aglietti et al. 2019;Luchi et al. 2020;Blaser et al. 2018) and invasive insect pests (Huang et al. 2009;Hsieh et al. 2012;Fekrat et al. 2015;Przybylska et al. 2015;Ide et al. 2016a, b;Blaser et al. 2018;Sabahi et al. 2018;Rizzo et al. 2020b). LAMP is a highly specific and robust identification method for species with previously known DNA or RNA sequences and suitable for on-site application because it can be performed in a laboratory-free environment after minimal training (Kogovšek et al. 2015). This paper presents a reliable and sensitive diagnostic test for the rapid diagnosis of A. bungii frass using the LAMP technique. The quality of this method is compared to the conventional PCR end point method and a qPCR protocol recently developed for the identification of A. bungii from frass (Rizzo et al. 2020a). Biological samples The target samples included adults, larvae, and frass of A. bungii. Adults and larval specimens were supplied by the Department of Agricultural Sciences of the University of Naples "Federico II" and the Plant Health Service of the Campania region. In some farms situated in the pest outbreak area around Naples (Campania, Italy), where A. bungii is considered as established (Carella 2019), frass samples ( Fig. 1) were collected at the trunk base of Prunus plants and individually labeled as in Rizzo et al. (2020a). The non-target samples consisted of a set of DNA samples from the entomological biomolecular collection of the phytopathological laboratory of the Phytosanitary Service of the Tuscany Region. The non-target DNA samples were listed in a previous paper (Rizzo et al. 2020a) and included a total of 62 samples belonging to 26 species. They were used for testing the diagnostic specificity of the protocols. The non-target samples included, depending on the species, adults and/or larval specimens and frass samples in the case of some xylophagous species. Among the non-target species, a subset of six xylophagous species producing frass (Anoplophora chinensis (Forster), An. glabripennis (Motschulsky), Cerambyx cerdo Linnaeus, Cossus cossus Linnaeus, Sesia sp. Fabricius, and Zeuzera pyrina Linnaeus) was chosen and DNA was extracted de novo from their frass for this study. These DNA samples will be hereafter be referred to as nontarget frass samples. DNA extraction The DNA extraction procedure was the same for real-time and visual LAMP protocols but had some changes in relation to the matrix (frass or larvae/adults). The extraction was carried out on A. bungii frass and larvae or adults following the CTAB extraction method suggested in Li et al. (2008) with slight modifications. Specifically, in the extraction from insect frass, about 1 g of matrix was homogenized in a 10-mL stainless steel grinding jar along with a Tissue-Lyzer (Qiagen, Hilden, Germany) for 10 s at 2000 opm. Each larva/adult was ground and homogenized individually using nylon mesh U-shaped bags (Bioreba, Reinach, Switzerland). Variable volumes (10 mL for insect frass and 1 mL for larvae) of 2% CTAB buffer (2% CTAB, 1% PVP-40, 100 mM Tris-HCl, pH 8.0, 1.4 M NaCl, 20 mM EDTA, and 1% sodium metabisulfite) were added immediately after grinding. A volume of 0.5-1 mL of lysate was then incubated at 65 °C for 10 min, 1 volume of chloroform was added, stirred by inversion and TissueLyzer centrifuged at 13,000 rpm for 10 min. 
An aliquot of 600 µL was then taken from the supernatant and an equal volume of isopropanol was inserted, mixed by inversion and centrifuged at 13,000 rpm for 5 min. The resulting pellet was dried by speed vacuum (Eppendorf, Milan, Italy) for 5 min, then resuspended in 100 µL of sterile, ultra-pure water and incubated at 65 °C for 5 min and used for LAMP/qPCR/conventional PCR reactions immediately or stored at − 20 °C until use. This extraction protocol was used on A. bungii samples (larvae and frass) and non-target frass samples in triplicate. The amount of DNA (ng/μL) and the A 260/280 ratios were evaluated for each sample using the QIAxpert spectrophotometer (Qiagen, Hilden, Germany). To detect biological traces of insects (feces, etc.) in the frass samples, the quality of the extracted DNA was estimated using a dual-labeled qPCR targeting a highly conserved region of the 18S rDNA (Ioos et al. 2009). LAMP reaction targeting the cytochrome c oxidase subunit I (COI) gene was also performed on frass samples to assess the amplifiability of the extracted DNA from wood (Tomlinson et al. 2010b). Design of A. bungii LAMP and conventional PCR end point primers In the LAMP reaction, six primers (F3/B3, FIP/BIP and LoopF/LoopB) were designed to specifically target a fragment of the cytochrome oxidase subunit I (COI) gene of A. bungii (accession n. KF737790). The primers were designed using the LAMP Designer software (OptiGene Limited, Horsham, UK) and synthesized by Eurofins Genomics (Ebersberg, Germany). The sequences of the primers are shown in Table 1. The specificity of the primers was further tested using BLAST ® (Basic Local Alignment Search Tool: http://www. ncbi.nlm.nih.gov/BLAST ; Altschul et al. 1990). A. bungii LAMP homologous sequences were downloaded from Gen-Bank and used for alignments to test the in silico specificity of the designed primers. The alignments were performed using the MAFFT software implemented in Geneious 10.2.6 (Kearse et al. 2012), set with the default parameters (Fig. 2). To evaluate and compare the analytical sensitivity, specificity and reliability of the developed real-time and visual LAMP protocols, conventional PCR (end point) assays for the diagnosis of A. bungii were designed (Table 2) using the OligoArchitect™ Primers and Probe Online software (Sigma-Aldrich, St. Louis, USA) with the following specifications: a 100-380 bp product size, a Tm (melting temperature) of 55-65 °C, primer length of 18-26 bp, and absence of secondary structure when possible. LAMP assay and conventional PCR end point optimization Real-time LAMP. The real-time LAMP reactions were performed using the Isothermal Master Mix (ISO-001) produced by OptiGene Limited (Horsham, UK) on a CFX96 thermocycler. Each isothermal reaction was performed in duplicate, in a final volume of 20 μL and using 2 μL of DNA. Negative controls (NTC-no template control) were included for each reaction. At the end of the LAMP reactions, a melting curve was generated by increasing the temperature from 65 to 95 °C with a 10-s interval every 0.5 °C (Abdulmawjood et al. 2014). In real-time LAMP amplification, raw data were analyzed using CFX Maestro v. 1.0 (Biorad, Berkeley, CA, USA). Real-time LAMP products were checked on a 1.7% agarose gel stained with Gel Red (Biotium, Fremont, CA, USA). The LAMP protocol optimization considered the following variables: isothermal amplification time, primer concentration and annealing temperature through a thermal gradient. 
Once the LAMP reaction had been optimized, the reactions were carried out using a second portable thermocycler, Genie ® II (Optigene, Ltd, Horsham, UK), to evaluate their reproducibility. The optimal reaction mix for the real-time LAMP assay consisted of 10 μL Isothermal Master Mix OptiGene (ISO-001), 0.2 μM of F3/B3, 0.4 μM of LoopF/LoopB, 0.8 μM of FIP/BIP and 2 μL of template DNA (5 ng/μL) in a final volume of 20 μL. The melting peak for A. bungii samples was 83.5 ± 0.5 °C (Fig. 3). Visual LAMP. To develop an alternative and easy-to-use protocol to detect A. bungii DNA from collected samples, a visual LAMP approach based on the primers designed for the real-time LAMP assay was also tested. The Bst 3.0 DNA polymerase kit (New England Biolabs Ltd., UK) was used for LAMP reactions on A. bungii DNA from frass with the same six LAMP primers used in the real-time LAMP test. Hydroxynaphthol Blue (HNB) was included in the reaction mixture (Goto et al. 2009) and the color change (from purple to blue) was evaluated at the end of the reaction. To optimize the visual assay conditions, the same parameters considered for the real-time LAMP were assessed. The following reagents were optimized in their quantities and/or concentrations: buffer, dNTPs, Betaine, MgSO4, HNB, primer concentration and Bst 3.0 DNA polymerase. The reaction was performed at 65 °C for 30 min. The 20-μL optimal visual LAMP reaction mixture consisted of 2 μL of Isothermal Buffer 10X, 0.6 mM of dNTPs, 2 mM of MgSO4, 0.15 mM of HNB, 0.2 M of Betaine, 0.32 U/μL of Bst 3.0 and final concentrations of the LAMP primers equal to 0.2 μM for F3/B3, 0.4 μM for LoopF/LoopB and 0.8 μM for FIP/BIP; 2 μL of DNA template (5 ng/μL) was used. The visual LAMP protocol was carried out on A. bungii DNA and on non-target DNA from frass of An. chinensis, An. glabripennis and C. cossus (Fig. 4). Conventional PCR. The conventional PCR reactions were performed in 20-μL reaction volumes containing 1X Master Mix PerfectTaq Hot Start 5Prime (Eppendorf, Milan, Italy), 0.4 µM forward and reverse primers, and 2 μL of DNA template in a MyCycler thermocycler (Biorad, Berkeley, CA, USA). Cycling conditions consisted of 3 min at 94 °C, followed by 40 cycles of 94 °C for 30 s, annealing (see Table 2) for 30 s and 72 °C for 45 s, with a final extension step of 7 min at 72 °C. PCR products were visualized on a 1.7% agarose gel stained with Gel Red. Performance characteristics of the LAMP assay Sensitivity, specificity and accuracy of the real-time and visual LAMP assays were evaluated after the optimization of the LAMP protocols on DNA samples of target and non-target species (62 samples belonging to 26 species). Samples with a time amplification value (Tamp, min:s) (Aglietti et al. 2019) greater than 30 min were not considered. In the visual LAMP, the diagnostic specificity was verified by the naked eye by assessing the color change of the reaction mixture. These parameters were calculated according to the EPPO standards on diagnostics PM7/76-4 (EPPO 2017) and PM7/98-4 (EPPO 2019). Blind panel validation of the assays A blind panel test was performed on six frass samples of A. bungii, two of Anoplophora chinensis, two of An. glabripennis and two of C. cossus. The test was carried out in two different laboratories (IPSP-CNR, Sesto Fiorentino, Italy and the Laboratory of the Plant Protection Service of Tuscany, Pistoia, Italy) applying the above-mentioned LAMP (real-time and visual) protocols. All DNA samples had been diluted at a final concentration of 5 ng/µL.
Samples were tested in duplicate; negative controls (NTC-no template control) were included. Based on the blind panel results, the true positives, false negatives, false positives and true negatives were evaluated according to the EPPO requirements outlined in PM7/76-4 (EPPO, 2017) and PM7/98-4 (EPPO 2019). Repeatability and reproducibility The repeatability and reproducibility tests were carried out on ten samples of A. bungii DNA extracted from frass. The intra-run variation (repeatability) and inter-run variation (reproducibility) were estimated by standard parameters, such as the average Tamp and standard deviation (SD). Ten samples in triplicate, diluted to a final concentration of 5 ng/ µL, were tested in two separate series for repeatability. The reproducibility of each protocol was assessed in the same way as carried out for the repeatability by comparing the data of two series of samples by two different operators on different days (Dhami et al. 2016;Koohkanzade et al. 2018). Limit of detection (LoD) For each methodology used in the experimental design, LoD was estimated using a tenfold 1:4 serial dilution using an "artificial" frass DNA (100 ng/µL), obtained by adding frass of another species (An. glabripennis, in this case) with 10 ng/µL of A. bungii DNA from larvae. All experiments were conducted in triplicate. To evaluate the influence of the initial matrix in defining the analytical sensitivity of the method under examination, the LoD verified with pure larva DNA extract and DNA extract from A. bungii artificial frass were compared. The comparison between the LoDs of the end point PCR and LAMP protocol was carried out by electrophoretic runs in 1.7% agarose gel stained with Gel Red (Biotium Inc., Fremont, CA, USA). In parallel, a QIAxcel Capillary Electrophoresis System (QIAgen, Valencia, CA, USA) was used with the inclusion of a 25-bp DNA marker. Comparison with conventional PCR and qPCR (SYBR Green) To compare the sensitivity and performance of the assay, frass was used as the matrix with other molecular techniques, traditional end point PCR and qPCR (both hydrolysis probe and SYBR Green), performed with the parameters reported in Table 2. The F3 and B3 primers, which are "external" to the ones used in the LAMP assay, were used in both conventional PCR and in qPCR SYBR Green. Nucleic acid extractions from frass and insects The amplifiability of the DNA extracted from target and non-target frass samples (Table 3) gave satisfactory results. The Tamp average value of COX gene (LAMP protocol) was 12.3 ± 2.4 (min). The verification of amplifiability with the qPCR probe on insect extracts showed average values of Cq equal to 18.64 ± 3.6. Diagnostic sensitivity, specificity, and accuracy of the LAMP assay None of the tests carried out on target and non-target samples showed any non-specific amplification, and only A. bungii produced amplification curves. A unique peak at 83.5 ± 0.5 °C, resulting from the melting curve analysis, was visualized for each A. bungii sample, regardless of the starting matrix and confirming the specificity of the real-time LAMP assay. In the case of the visual LAMP assay, only A. bungii samples (adult, larvae, and frass) were detected by the LAMP reaction, while none of the non-target samples (62 samples) was amplified. For both protocols, diagnostic sensitivity, diagnostic specificity, and relative accuracy were 100%. 
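The diagnostic performance figures reported above follow the usual EPPO-style definitions based on true/false positives and negatives. A minimal sketch of that calculation for the blind panel is given below; counting each duplicate reaction separately (12 target and 12 non-target reactions) is an assumption about how the replicates are tallied.

```python
def diagnostic_performance(tp, fn, fp, tn):
    """Diagnostic sensitivity, specificity and accuracy (EPPO PM 7/98-style definitions)."""
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    accuracy = (tp + tn) / (tp + fn + fp + tn)
    return sensitivity, specificity, accuracy

# Blind panel of this study: 6 A. bungii frass samples (all detected) and
# 6 non-target frass samples (none amplified), each tested in duplicate.
sens, spec, acc = diagnostic_performance(tp=12, fn=0, fp=0, tn=12)
print(f"sensitivity={sens:.0%}, specificity={spec:.0%}, accuracy={acc:.0%}")  # all 100%
```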
The end point PCR protocols designed to compare the analytical sensibility (LoD) were also assayed on the same target and non-target samples, showing a diagnostic specificity of 100%, as in the LAMP assay developed in this study. Blind panel validation of the assay The blind panel test performed using the real-time and visual LAMP protocols showed the amplification only of the A. bungii frass samples, with a mean Tamp value equal to 18.21 ± 0.42 min in the case of real-time LAMP, whereas the non-target frass samples were not amplified. The specificity, sensitivity and accuracy of the data were 100%. In both laboratories, the results obtained with real-time and visual LAMP were the same. Only the A. bungii frass samples amplified, whereas there was no amplification of the DNA samples extracted from the frass of the xylophagous species used as comparison (non-target frass samples). Repeatability and reproducibility of the diagnostic methods In terms of repeatability, the Tamp values varied from 10.12 to 13.30 min with a mean value of 10.90 ± 1.20 min, and an average CV% of 11.04. The standard deviation (SD) of the two replicates of the same protocol ranged between 0.06 and 3.65. In terms of reproducibility, the values ranged between 0.10 and 7.24 ( Table 4). Limit of detection (LoD) of the LAMP assay and comparison with conventional PCR and qPCR The LoD was obtained both for the real-time LAMP assay and for the visual LAMP. For the real-time assay, the LoD was 0.61 pg/µL, with a Tamp value of 24.36 ± 0.90 min. For the visual LAMP assay, the LoD was the same as for the real-time LAMP assay. Table 5 compares the LoD values obtained in the different techniques. The data assigned to the PCR protocols (probe for hydrolysis and SYBR Green) (Rizzo et al. 2020a), have been omitted in this table. Figures 5 and 6 show the results of the electrophoretic runs carried out to compare the LoDs of the conventional PCR (end point) and LAMP, using 1.7% agarose gel stained with Gel Red and QIAxcel Capillary Electrophoresis System (Qiagen, Valencia, CA, USA), respectively. The comparison of the analytical sensitivity according to the starting matrix (larva and artificial frass) provided the data shown in Table 6. Discussion Molecular tools for identifying quarantine insect pests are essential for managing outbreaks, especially in view of the setup of international shared diagnostic protocols (Augustin et al. 2012). Of these molecular methods, the LAMP technique (Tani et al. 2007;Tomlinson et al. 2010a;Moradi et al. 2014;Blaser et al. 2018;Panno et al. 2020) can be used for a direct diagnosis of insect specimens (adults or larvae), as well as for an indirect analysis of insect DNA present in residues deriving from the trophic activity (e.g., frass as in Kyei-Poku et al. 2020). For frass samples, three critical points must be considered: (a) the paucity of insect DNA in these samples; (b) the presence of amplification inhibitors deriving from frass (Mitchell and Hanks 2009;Schrader et al. 2012;Strangi et al. 2013;Nagarajan et al. 2020;Rizzo et al. 2020a, b); and (c) the possibility DNA degradation over time or as an effect of frass exposition to environmental factors. We used the LAMP method on A. bungii frass. Our results show that all three issues (a-c above) were overcome. In all samples, the DNA quantity was always suitable and amplifiable for the LAMP reactions, managing the co-extraction of inhibitors from the frass samples, and with an A 260/280 ratio of between 1.8 and 2.0. 
However, the DNA amount extracted from adults and larvae of A. bungii was higher than in the frass samples, but with a higher variability in terms of concentration, probably related to the specimen size. Our real-time LAMP protocol on frass gave good results in terms of specificity, especially given that Aromia moschata, a native species taxonomically related to A. bungii included in the non-target species assayed, did not respond to the amplification reaction (Rizzo et al. 2020a). The protocol was also sensitive and accurate, and overall, the reaction demonstrated its robustness when the test was performed on different thermocyclers and with different operators. The repeatability and reproducibility data showed SD values with a relatively high range (Teter and Steffen 2020), of variability (presumably due to the presence of a high quantity of PCR inhibitors in frass). The use of LAMP based on a naked-eye detection system to determine the amplification result is becoming a routine approach in molecular diagnosis (Blaser et al. 2018). Our visual LAMP is a further simplification of the real-time LAMP technology as it does not require sophisticated instruments (which entail large investments, skilled personnel, and high management costs), is rapid, specific, sensitive and with a good accuracy, also compared to real-time LAMP. In addition, the limits of detection are identical to those of real-time LAMP (LoD of 0.61 pg/µL for the proposed techniques). The analytical sensitivity of the LAMP (LoD) test compared with conventional PCR (28F/489R and 51F/368R) was more sensitive (> 10 3 ) to the same starting matrix. The results show that LAMP assays and qPCR SYBR Green method (using the F3/B3 LAMP external primers) are equally sensitive, and they are more sensitive than conventional PCR. The analytical sensitivity is affected by the matrix investigated. This was clear when the LoD of a DNA extract from A. bungii larva serially diluted 1:4 (from 10 ng/L to 2.38 fg/µL) was compared with the values resulting from the LoD of the LAMP assay on A. bungii 's artificial frass. The LAMP test studied was 10 3 (from 0.61 pg/l to 9.53 fg/µL) more sensitive from the "pure" matrix of A. bungii larva than the corresponding artificial frass. These values confirm that the starting matrix is difficult to extract and amplify, but at the same time indicate the excellent performance of our LAMP assay. A comparison of the data resulting from similar studies (Rizzo et al. 2020a), clearly show the greater analytical sensitivity of our new LAMP approach. Although LAMP is a powerful method for the screening of samples and rapid responses, it may not be suitable when many validation parameters need to be estimated, as in the case of intra-or inter-lab comparisons (Panno et al. 2020). Moreover, the LAMP reaction is more prone to cross-contamination than other amplification techniques (Karami et al. 2011;Karthik et al. 2014). The rapidity (less than 2 h) of our tests and, in the case of visual LAMP, the cheapness of the proposed protocols suggest their potential in the near future for preventing or managing outbreaks of A. bungii in areas with a high risk of introduction, especially if integrated with other monitoring tools such as pheromone or allelochemical traps. A decisive enhancement for making the method simpler to apply also in the field, could be a simplification of the DNA extraction from the frass matrix using a crude extract. 
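Because the LoD experiments rely on 1:4 serial dilutions, the concentration at each step follows directly from the starting concentration. The sketch below generates such a series and picks the lowest detected step; the function names and the example detection pattern are hypothetical. Starting from 10 ng/µL, steps 7 and 10 of a 1:4 series give about 0.61 pg/µL and 9.5 fg/µL, consistent with the LoD values reported above for the artificial frass and the pure larval extract.

```python
def serial_dilution(start_conc_pg_per_ul, factor=4, steps=12):
    """Concentrations (pg/uL) of a 1:factor serial dilution starting at start_conc."""
    return [start_conc_pg_per_ul / factor ** i for i in range(steps)]

def limit_of_detection(concentrations, detected):
    """Lowest concentration still detected, assuming detection is monotonic in concentration."""
    hits = [c for c, ok in zip(concentrations, detected) if ok]
    return min(hits) if hits else None

series = serial_dilution(10_000.0)                       # 10 ng/uL expressed in pg/uL
print(round(series[7], 2), round(series[10] * 1000, 2))  # ~0.61 pg/uL and ~9.54 fg/uL
# e.g. limit_of_detection(series, [True]*8 + [False]*4) returns series[7] ~ 0.61 pg/uL
```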
Conclusions The efficient management of a quarantine insect pest is based on detecting outbreaks as quickly as possible. Among the molecular methods, LAMP is a promising tool and more simple than the classical morphological approach, which requires intact samples and highly specialized skills. This is particularly true for xylophagous insects, where the sample collection is onerous in terms of time and costs, but also difficult due to the endophytic life of the preimaginal stages. Author contributions Conceptualization of the research approach, detailed laboratory methodologies, and experimental designs for this study were developed and conducted by DR (principal investigator) with the assistance of NL, DDL, and ER. All field works were carried out by RVG, APG, GC, TB, and FN. All sample preparations, data collection and statistical analyses of data collected from DNA extracts to LAMP were completed by DR, DDL, CS, NL and ER. Data curation and data mining, reference assembly and manuscript formatting were done by DR, NL, APG, FN, and ER. This paper was originally drafted by DR, DDL, ER, FN, NL, and APG. The manuscript was revised and edited by all the authors. Funding Open Access funding provided by Università di Pisa. Compliance with ethical standards Conflict of interest The authors declare no conflict of interest. Table 6 Analytical sensitivity (LoD) between 1:4 serial dilutions of Aromia bungii larva DNA extract (10 ng-2.38 fg/ µL) and 1:4 serial dilutions of Anoplophora glabripennis artificial frass extract to which 10 ng of A. bungii larva extract was added. The ± symbol in the A. bungii visual LAMP column indicates an uncertain result Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creat iveco mmons .org/licen ses/by/4.0/.
2021-01-19T15:04:57.294Z
2021-01-19T00:00:00.000
{ "year": 2021, "sha1": "3fc089418b28da80fb3198258b1208def99fd586", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s13205-020-02602-w.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "3fc089418b28da80fb3198258b1208def99fd586", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
132600562
pes2o/s2orc
v3-fos-license
Impact of climate change on hydrological conditions in a tropical West African catchment using an ensemble of climate simulations This study evaluates climate change impacts on water resources using an ensemble of six Regional Climate Models (RCMs)-Global Climate Models (GCMs) in the Dano catchment (Burkina Faso). The applied climate datasets were produced in the framework of the COordinated Regional climate Downscaling Experiment (CORDEX-Africa) project. After evaluation of the historical runs of the climate model ensemble, a statistical bias correction (Empirical Quantile Mapping) was applied to daily precipitation. Temperature and bias corrected precipitation data from the ensemble of RCMs-GCMs were then used as input for the Water flow and balance Simulation Model (WaSiM) to simulate water balance components. The mean hydrological and climate variables for two periods (1971-2000 and 2021-2050) were compared to assess the potential impact of climate change on water resources up to the middle of the twenty-first century under two greenhouse gas concentration scenarios, the Representative Concentration Pathways (RCPs) 4.5 and 8.5. The results indicate: (i) a clear signal of temperature increase of about 0.1 to 2.6 °C for all members of the RCMs-GCMs ensemble; (ii) high uncertainty about how the catchment precipitation will evolve over the period 2021-2050; (iii) the applied bias correction method only affected the magnitude of the climate change signal; (iv) individual climate model results lead to opposite discharge change signals; and (v) the results for the RCMs-GCMs ensemble are too uncertain to give any clear direction for future hydrological development. Therefore, both a potential increase and a potential decrease of future discharge have to be considered in climate change adaptation strategies in the catchment. The results further underline, on the one hand, the need for a larger ensemble of projections to properly estimate the impacts of climate change on water resources in the catchment and, on the other hand, the high uncertainty associated with climate projections for the West African region. A water-energy budget analysis provides further insight into the behavior of the catchment. Introduction Development of adaptation strategies to deal with potential impacts of climate change on hydrological systems is a considerable challenge for water resources management (Muerth et al., 2013; Piani et al., 2010). Besides being highly exposed to climate change, the West African region presents a low adaptive capacity (IPCC, 2014). Projections for the late 21st century suggest severe consequences of climate change for water resources in the region. This includes an increased risk of water stress and flood (Sylla et al., 2015; Oyerinde et al., 2014) and significant change in river discharge (Aich et al., 2014; Ardoin-Bardin et al., 2009; Mbaye et al., 2015). Rising temperatures, commonly acknowledged by regional climate models (RCMs) and global climate models (GCMs), are expected to intensify the hydrological cycle due to an increased water holding capacity of the atmosphere, leading to an increased amount of renewable freshwater resources (Piani et al., 2010). Another consequence of temperature increase ascertained by Piani et al.
(2010) for some regions is the decrease in precipitation associated with the intensification of the seasonal cycle and the frequency of extreme events.These opposite trends imply that high uncertainties are associated with predicted rising temperatures' impact on the hydrological cycle for some regions (Salack et al., 2015). Confidence in RCMs and GCMs over West Africa relies on their ability to simulate the West African monsoon (WAM) precipitation (Klein et al., 2015).However, simulating the WAM remains challenging for both RCMs and GCMs (Cook, 2008;Druyan et al., 2009;Paeth et al., 2011;Ruti et al., 2011), as each RCM and GCM produces a version of the WAM, but with some distortion of structure and/or timing.Some GCMs (e.g., CSIRO, GISS_ER, ECHAM5, CCSM) do not generate the WAM at all (Cook and Vizy, 2006).Part of this divergence is related to (i) imperfect characterization of tropical precipitation systems; (ii) uncertain future greenhouse gas forcing; (iii) scarcity of observations over West Africa; and (iv) natural climate variability (Cook and Vizy, 2006;Foley, 2010).The hydrological climate change signal is therefore unclear for the region.Several authors (Kasei, 2009;Paeth et al., 2011;Salack et al., 2015) observed diverging precipitation signals among models.Moreover, several models fail to accurately reproduce the historical rainfall onset, maxima, pattern, and amount of the region (Nikulin et al., 2012;Ardoin-Bardin et al., 2009). Despite significant advances, outputs of GCMs and RCMs are still characterized by biases that challenge their direct use in climate change impact assessment (Ehret et al., 2012).Indeed, unless the precipitation from climate models is bias corrected, results from hydrological simulations are often reported to be unrealistic and may lead to incorrect impact assessments (Johnson and Sharma, 2015;Teutschbein and Seibert, 2012;Ahmed et al., 2013).However, correction of climate model based simulation results does not ensure physical consistency (Muerth et al., 2013) and may affect the signal of climate change for specific regions as reported by Hagemann et al. (2011).Consequently, simulated hydrological variables using bias corrected data need to be explored in climate change impact assessment. There is essential consensus on the necessity of performing multi-(climate)-model assessments to estimate the response of the West African climate to global change (Sylla et al., 2015).Accordingly, several studies (e.g., Chen et al., 2013;Zhang et al., 2011) emphasize the importance of using multiple climate models to account for uncertainty when assessing climate change impacts on water resources.Taking advantage of the results of the COordinated Regional climate Downscaling Experiment (CORDEX-Africa) project, this study evaluates potential climate change impacts on water resources using an ensemble of six RCMs-GCMs in the Dano catchment in Burkina Faso.The catchment experiences seasonally limited water availability, and like other catch-ments of the region, it has experienced the severe droughts of the 1970s (Kasei et al., 2009) which resulted in a decline of discharge in many West African catchments. 
A few studies have already investigated the impacts of projected climate change on water resources in West Africa (see Roudier et al., 2014, for a review). Many of these studies have used an approach based on hydrological models driven by a single RCM or GCM dataset (e.g., Mbaye et al., 2015; Cornelissen et al., 2013; Bossa et al., 2012, 2014). Therefore, uncertainty related to the choice of the climate model was not explicitly evaluated. However, other studies have used multi-climate model datasets (Kasei, 2009; Ruelland et al., 2012; Aich et al., 2016); most of these studies have resulted in a diverging projected hydrological change signal. Climate model outputs have often been bias corrected to fit the historical climate variables and then used as input for hydrological models, but few have investigated the necessity of performing such corrections in detecting the signal of future climate change impacts on water resources. The current study aims to investigate the future climate change impacts on the hydrology of the Dano catchment in Burkina Faso, thus contributing to the management of water resources in the region. Besides the small scale of the catchment, which implies addressing scale issues, the novelty of the study includes a water-energy budget analysis. Specifically, it has the following objectives: (i) evaluate the historical runs of six RCMs-GCMs at the catchment scale; (ii) analyze the climate change signal for the future period of 2021-2050 compared to the reference period of 1971-2000; (iii) evaluate the ability of the climate models to reproduce the historical discharge; (iv) assess the impacts of climate change on the hydrology of the catchment by the middle of the 21st century; and (v) perform an ecohydrological analysis of the catchment under climate change. Study area The study was carried out in the Dano catchment covering a total area of 195 km² in the Ioba province of southwestern Burkina Faso (Fig. 1). The catchment is one of the study areas of the WASCAL project (West African Science Service Center on Climate Change and Adapted Land Use, http://www.wascal.org), whose main target is to increase the resilience of human and environmental systems to climate change. The major land uses in the catchment include shifting cultivation, which accounts for one-third of the catchment area; the natural vegetation, albeit converted into agricultural and fallow lands, forms part of the Sudanian region characterized by wooded, scrubby savannah and abundant annual grasses. Sorghum (Sorghum bicolor), millet (Pennisetum glaucum), cotton (Gossypium hirsutum), maize (Zea mays), cowpeas (Vigna unguiculata), and groundnut (Arachis hypogaea) are the major crops cultivated in the catchment. The catchment is characterized by a flat landscape with a mean slope of 2.9 % and a mean altitude of 295 m a.s.l. (above sea level). According to Schmengler (2011), a mean annual temperature of 28.6 °C was recorded, while mean annual rainfall ranged from 800 mm to 1200 mm for the period of 1951-2005. The catchment receives monsoonal rains with a dry season occurring in the months of November to April, with the wet season being experienced in the months of July to September. This kind of rainfall pattern limits water availability, especially in the dry season; hence, communities in the catchment are vulnerable to water scarcity since they heavily rely on surface water.
Plinthosol characterized by a plinthite subsurface layer in the upper first meter of the soil profile accounts for 73.1 % of the soil types in the catchment; other soil types found within the catchment include gleysol, cambisol, lixisol, leptosol, and stagnosol (WRB, 2006). Climate data Observed mean daily temperature and daily precipitation used in the study were collected from the national meteorological service of Burkina Faso (DGM). The dataset covers the reference period of 1971-2000. Although the national observation network includes several rainfall gauges and synoptic stations, solely the data of the Dano station were used, as it is located in the study area. An ensemble of six RCM-GCM datasets is exploited in the study (Table 1). The RCM-GCM simulations were performed in the framework of the CORDEX-Africa project. The datasets were produced by three RCM groups (CCLM: Climate Limited-area Modelling Community, Germany; RACMO22: Royal Netherlands Meteorological Institute, Netherlands; HIRHAM5: Alfred Wegener Institute, Germany) using the boundary conditions of four GCMs (CNRM-CM5, EC-EARTH, ESM-LR, NorESM-M). Each dataset consists of historical runs and projections based on emission scenarios RCP4.5 and RCP8.5 (Moss et al., 2010). The retrieved data (precipitation and temperature) range from 1971 to 2000 for the historical runs and from 2021 to 2050 for the RCPs. An extent of 9 nodes (3 × 3, which is the RCM models' resolution degraded by a factor of 3) of the African CORDEX domain, surrounding the catchment, was delineated to simulate the catchment's climate (Fig. 1b). The areal mean of the 9 nodes was used to evaluate the simulated precipitation against the observations at the Dano station (the reference station). The climate variables (historical and projected) of the extent of 9 nodes were used as inputs for the hydrological simulation model as well. Due to the discrepancy between the RCM-GCM data resolution (0.44°, about 50 km × 50 km) and the hydrological modeling domain (about 18 km × 11 km), the data of each node were separately used as climate input for the hydrological simulation model. Therefore, for each period (historical and projected scenarios) nine simulations corresponding to the nine nodes are performed per RCM-GCM. The monthly water balance for each RCM-GCM is then calculated as the arithmetic mean. Bias correction of precipitation data The RCM-GCM ensemble was evaluated to get an estimate of the historical simulated variables for the catchment by comparing RCM-GCM based simulations of historical climate variables to the observations provided by the National Meteorological Service (DGM). As presented in Sect. 3.1, whereas temperature simulated by the models' ensemble encompassed the observed temperature with moderate deviation, precipitation simulated by individual RCMs-GCMs exhibited biases such as overestimation of annual precipitation as well as misrepresentation of the timing of the rainy season. A precipitation bias correction was therefore applied to the six RCMs-GCMs following the non-parametric quantile mapping using the empirical quantiles method (Gudmundsson et al., 2012). For each member, transfer functions (TFs) were derived using observed and modeled precipitation for the period of 1971-2000; afterwards the transfer functions were applied to the projected climate scenarios (period 2021-2050).
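The empirical quantile mapping step described above can be sketched in a few lines of Python; the function names, the number of quantiles, and the use of linear interpolation between matched quantiles are illustrative assumptions, not the implementation of Gudmundsson et al. (2012) used in the study.

import numpy as np

def fit_eqm_transfer_function(p_obs_hist, p_mod_hist, n_quantiles=100):
    # Derive an empirical transfer function from observed and modeled daily
    # precipitation of the calibration period (here 1971-2000).
    q = np.linspace(0.0, 1.0, n_quantiles)
    return np.quantile(p_mod_hist, q), np.quantile(p_obs_hist, q)

def apply_eqm_transfer_function(p_mod_future, transfer_function):
    # Map each projected value onto the observed distribution by linear
    # interpolation between the matched quantiles of the calibration period.
    mod_q, obs_q = transfer_function
    return np.interp(p_mod_future, mod_q, obs_q)

# Usage sketch for one RCM-GCM member and one RCP (hypothetical array names):
# tf = fit_eqm_transfer_function(p_dano_1971_2000, p_rcm_1971_2000)
# p_rcm_2021_2050_bc = apply_eqm_transfer_function(p_rcm_2021_2050, tf)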
However, a consistent application of bias correction is subject to numerous hypotheses that need to be fulfilled, at the risk of altering the climate change signal (Muerth et al., 2013; Ehret et al., 2012; Hagemann et al., 2011). This includes the hypotheses of reliability, effectiveness, time invariance or stationarity, completeness, etc. (a complete discussion of these hypotheses is provided by Ehret et al. (2012)). Precipitation in the Dano region is characterized by a strong decadal variability and a non-stationary annual behavior (Oyerinde et al., 2014; Karambiri et al., 2011; Waongo, 2015), which implies that a TF derived from a short period (e.g., a decade) does not fulfill the time invariance hypothesis. Similarly, a TF derived from a short period precludes the hypothesis of completeness and is likely not to be suitable for application to a period that does not overlap the derivation period, as TFs are likely to change from one period to another (Piani et al., 2010). A cross-validation approach (e.g., Lafon et al., 2013; Teutschbein and Seibert, 2013) using the periods of 1971-1990 and 1991-2000 for calibration and verification, respectively, showed that for the climate model ensemble biases in precipitation are in general reduced by the bias correction method (Fig. 2). However, due to the mentioned decadal signal that characterizes precipitation variability in the region, considerable deviations between bias corrected and observed precipitation (up to 40 mm month −1 ) are still noticeable. Therefore, a consistent application of bias correction requires that these hypotheses be fulfilled as closely as possible (Ehret et al., 2012), to guarantee that the climate change signal is not altered by the bias correction approach under changing conditions. To get a better approximation of the completeness hypothesis, the TFs for each climate model were derived using all the historical climate data available at the reference station (period 1971-2000). Hydrological modeling Observed and RCM-GCM based (historical runs and projections) data were used as climate input for the Richards equation based version 9.05.04 of the Water flow and balance Simulation Model (WaSiM) (Schulla, 2015). WaSiM is a deterministic and spatially distributed model, which uses mainly physically based approaches to describe hydrological processes. The model configuration as applied in this study is shown in Table 2. Schulla (2015) gives more details of the model structure and processes in the Model Description Manual. A previous study confirmed the suitability of WaSiM to model the hydrology of the Dano catchment. Details of the model setup and parameterization are available in that study (Yira et al., 2016). Briefly summarized, the model was calibrated and validated using observed discharge for the period of 2011-2014, daily time steps, and a regular raster-cell size of 90 m. Latin hypercube sampling (LHS) was used to identify and optimize sensitive parameters (drainage density, storage coefficient for surface runoff, and storage coefficient for interflow) with the sum of the squared error set as an objective function. Following the LHS, several model parameterizations led to equally good model quality measures. Out of these good parameter sets, the one scoring the highest sum of the Pearson product-moment correlation coefficient (r 2 ), Nash-Sutcliffe efficiency (NSE, Nash and Sutcliffe, 1970), and Kling-Gupta efficiency (KGE, Gupta et al., 2009; Kling et al., 2012) was used as the best parameter set.
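For reference, the goodness-of-fit measures used to rank the Latin-hypercube parameter sets can be computed as below; this is a minimal sketch with illustrative variable names, not the WaSiM calibration code of Yira et al. (2016).

import numpy as np

def nse(sim, obs):
    # Nash-Sutcliffe efficiency (Nash and Sutcliffe, 1970)
    sim, obs = np.asarray(sim), np.asarray(obs)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(sim, obs):
    # Kling-Gupta efficiency in the Gupta et al. (2009) formulation
    sim, obs = np.asarray(sim), np.asarray(obs)
    r = np.corrcoef(sim, obs)[0, 1]
    alpha = sim.std() / obs.std()   # variability ratio
    beta = sim.mean() / obs.mean()  # bias ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

def r2(sim, obs):
    # squared Pearson product-moment correlation coefficient
    return np.corrcoef(np.asarray(sim), np.asarray(obs))[0, 1] ** 2

# The best parameter set is then the one maximizing nse(...) + kge(...) + r2(...).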
In the absence of long-term discharge observations for the catchment, the reliability of the model parameters in time could not be assessed in a classical way. Therefore, a soft validation approach was adopted. The approach consisted in determining, based on the Standardized Precipitation Index, whether the calibration/validation years represented normal years in the catchment (considering the historical period of 1990 to 2014). This evaluation showed that both calibration and validation periods are normal and reflect the annual rainfall pattern in the catchment for the period 1990-2014 (Fig. 1 of the supplementary materials in Yira et al., 2016, shows this evaluation). Therefore, the model parameters for the catchment are expected to be reliable for a long period. In addition to the validation using the discharge, the model was further validated against soil moisture under the dominating soil type and groundwater level recorded by four piezometers. Minimum values of 0.7 for NSE, KGE, and r 2 were achieved during the calibration and validation using observed discharge. r 2 was higher than 0.6 for soil moisture and groundwater level. Therefore, no further model calibration was done in the current study. Discharge simulated with RCM-GCM historical runs (bias corrected and non bias corrected) was compared to the discharge obtained with observed historical climate data. These comparison runs showed that bias correction was necessary for RCM-GCM based simulations to reproduce the historical discharge regime. To integrate the potential effect of bias correction on the climate change signal, as discussed in Sect. 2.3 and raised by different authors (e.g., Muerth et al., 2013; Ehret et al., 2012; Hagemann et al., 2011), the hydrological model was run with both bias corrected and non bias corrected climate inputs for the climate model ensemble. No hydrologic observations (discharge, soil moisture, and groundwater level) are available for the reference period in the catchment. The expected climate change for an RCM-GCM is therefore expressed as the relative difference between simulated hydrological variables under the reference period and future period (2021-2050). Ecohydrologic analysis A concept of water-energy budget (Tomer and Schilling, 2009; Milne et al., 2002) was applied to estimate the effectiveness of water and energy use by the catchment as it undergoes climate change. While experiencing climate change, a trend towards the optimization of the total unused water-P ex (1) and energy-E ex (2) existing in the environment is usually observed. Plotting P ex against E ex allows for determination of the ecohydrologic status of the catchment. The climate change signal can therefore be detected by the shift in this status. The direction of the shift indicates whether the catchment experienced water stress or increased humidity. The approach was used to test its validity in analyzing the interplay between temperature increase and precipitation change as projected by the RCM-GCM ensemble. In Eqs. (1) and (2), P ex is the unused water; E ex is the unused energy; P is the precipitation; ET a is the actual evapotranspiration; and ET p is the potential evapotranspiration. Assessment criteria A set of evaluation measures was used to analyze the RCM-GCM historical runs, to assess model performance and to estimate the effects of different climate scenarios on hydrological variables. i. P factor: measures the percentage of observed climate data covered by the RCM-GCM ensemble historical runs.
ii. The R factor, calculated following Eq. (3), indicates for an observation series how wide the range between the minimum RCM-GCM and maximum RCM-GCM for precipitation and temperature is, compared to the observation. In Eq. (3), Var is the climate variable (e.g., precipitation); n is the number of observation data points; σ is the standard deviation; obs refers to the observation; Si min is the minimum value of the RCM-GCM ensemble; and Si max is the maximum value of the RCM-GCM ensemble. iii. The normalized root-mean-square deviation (NRMSD) expresses the deviation of each RCM-GCM based precipitation and temperature from the observations. iv. The Pearson product-moment correlation coefficient (r 2 ), the percent bias (PBIAS), the Nash-Sutcliffe efficiency (NSE) (Nash and Sutcliffe, 1970), and the Kling-Gupta efficiency (KGE) (Gupta et al., 2009; Kling et al., 2012) assess the RCM-GCM based discharge simulations' ability to reproduce discharge computed using observed climate data. v. The change signal (ΔVar) in climate and hydrological variables (precipitation, temperature, and discharge) expresses the relative difference between projected and historical values (Eq. 5), ΔVar = (Var Proj − Var Ref )/Var Ref , where ΔVar is the change signal for the evaluated variable (e.g., discharge); Var Proj is the projected value of the variable (period of 2021-2050 under RCP4.5 and RCP8.5); and Var Ref is the reference value of the variable (period of 1971-2000). vi. The Wilcoxon (1945) rank-sum test was used to compare the discharge change signal with bias corrected and non bias corrected precipitation data following Muerth et al. (2013). The test evaluated the null hypothesis "discharge change signal under bias corrected data equals discharge change signal under non bias corrected data". The rejection of the test at 5 % implies that future discharge change under bias correction and no bias correction are significantly different. If the test is not rejected, both discharge change under bias correction and change under non bias correction yield the same result, and thus bias correction does not alter the climate change signal on projected discharge. Historical runs analysis The comparison between RCM-GCM historical runs and observations for temperature and precipitation is done for the reference period of 1971-2000 for average monthly values. The correlation coefficient is plotted against the NRMSD (Fig. 3) for a cross-comparison between RCMs-GCMs in order to assess the relative ability of each RCM-GCM to represent historical climate conditions in the catchment. The correlation coefficient for the RCM-GCM ensemble is in general higher than 0.7 for both precipitation and temperature. The highest coefficients (0.96) are scored by CCLM-ESM for temperature and HIRHAM-NorESM for precipitation. The RCM-GCM ensemble mean outscores five members of the RCM-GCM ensemble with regard to temperature and precipitation (Fig. 3). The RCM-GCM ensemble shows a clear deviation from observed precipitation compared to temperature (Fig. 3). HIRHAM-EARTH and CCLM-EARTH present the lowest deviation for temperature and precipitation, respectively. The RCM-GCM ensemble mean outscores four out of six RCMs-GCMs for temperature and precipitation with regard to the deviation from observed data.
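The evaluation measures listed under the assessment criteria can be written compactly as follows; the explicit R-factor formula is an interpretation of the verbal definition given for Eq. (3), and normalizing the RMSD by the observed range (rather than the mean) is an assumption.

import numpy as np

def p_factor(obs, ens_min, ens_max):
    # share of observations falling inside the ensemble envelope
    obs, ens_min, ens_max = map(np.asarray, (obs, ens_min, ens_max))
    return np.mean((obs >= ens_min) & (obs <= ens_max))

def r_factor(obs, ens_min, ens_max):
    # mean width of the ensemble band relative to the standard deviation of the observations
    obs, ens_min, ens_max = map(np.asarray, (obs, ens_min, ens_max))
    return np.mean(ens_max - ens_min) / np.std(obs)

def nrmsd(sim, obs):
    # root-mean-square deviation normalized by the observed range
    sim, obs = np.asarray(sim), np.asarray(obs)
    return np.sqrt(np.mean((sim - obs) ** 2)) / (obs.max() - obs.min())

def change_signal(var_proj, var_ref):
    # relative change (%) between the future (2021-2050) and reference (1971-2000) value
    return 100.0 * (var_proj - var_ref) / var_ref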
Figure 4a and b show a trend towards an overestimation of annual precipitation throughout the reference period for the RCM-GCM ensemble when precipitation data are not bias corrected (UC). Although the RCM-GCM ensemble presents a large dispersion (R factor = 4.3), only 50 % (P factor = 0.5) of observed precipitation is covered by the RCM-GCM ensemble. After bias correction (BC), the RCM-GCM ensemble agrees in general with the observed precipitation (P factor = 0.8); moreover, the dispersion of climate model based precipitation decreases (R factor = 3.2). The mean annual precipitation pattern is in general well captured by all RCMs-GCMs (Fig. 4c and d). However, the climate models' ensemble, when not bias corrected, covers only 50 % of monthly precipitation despite a large dispersion (Fig. 4c). After bias correction, the agreement between RCM-GCM based precipitation and observation is considerably improved (Fig. 4d), and the uncertainty band of the climate model is considerably reduced (R factor = 0.1). However, a slight positive bias is still presented by the climate models' ensemble. Figure 5 shows that the RCM-GCM ensemble fully captures the annual temperature pattern (P factor = 100 %). However, a gap of up to −4 °C between some climate models and observations is noted. This translates into an R factor reaching 8.2. On average, RACMO-EARTH shows an underestimation of temperatures throughout the year, whereas HIRHAM-NorESM indicates an opposite trend. Climate change signal The RCM-GCM ensemble exhibits a mixed annual precipitation change signal between the reference period and future period (2021-2050) (Table 3). CCLM-CNRM, RACMO-EARTH, and HIRHAM-NorESM project a precipitation increase of about 2.5 to 21 %, whereas CCLM-ESM and CCLM-EARTH indicate a decrease of 3 to 11 %. Bias correction has a minor impact on these signals, as the magnitude of projected precipitation increase ranges from 1 to 18 % and the decrease is around 5-13 % after bias correction. A much more complex intra-annual precipitation change signal is projected by the climate models' ensemble (Fig. 6). CCLM-CNRM and HIRHAM-NorESM, which projected increased annual precipitation, are characterized by increased rainfall from May to June followed by decreased rainfall in August. RACMO-EARTH shows increased rainfall throughout the season except in July. The decrease in annual precipitation projected by CCLM-ESM and CCLM-EARTH is consistent throughout the entire season. The climate model ensemble consistently projects a mean monthly temperature increase of about 0.1 to 2.3 °C under RCP4.5 and 0.6 to 2.5 °C under RCP8.5, leading to an increase in potential evapotranspiration for the climate models' ensemble. Historical discharge RCM-GCM ensemble based discharges are compared to discharge simulated using observed climate data to evaluate the climate models' ability to reproduce the historical discharge regime over the reference period (Fig. 7). Accordingly, performances (r 2 , NSE, KGE, and PBIAS) achieved by the climate models are presented in Table 4. Figure 7a shows good agreement between bias corrected climate model based discharge and observation based discharge, with a trend towards discharge overestimation for some climate models (RACMO-EARTH, CCLM-EARTH, and HIRHAM-EARTH).
Discharge change Projected change in annual discharge for the period of 2021-2050 compared to the reference period is presented in Table 5. As for precipitation, a mixed annual discharge change signal is projected by the climate model ensemble. With bias corrected climate data, the following is projected: (i) a more than 15 % decrease in annual discharge, which is a consequence of a relative decrease in precipitation and a consistent increase in potential evapotranspiration, for CCLM-ESM, CCLM-EARTH, and HIRHAM-EARTH (RCP8.5); and (ii) a low to very high (3 to 50 %) increase in total discharge due to increased precipitation not counterbalanced by the evapotranspiration for CCLM-CNRM, RACMO-EARTH, HIRHAM-NorESM, and HIRHAM-EARTH (RCP4.5). This divergence between climate models is reflected through a large amount of uncertainty associated with the projected annual discharge (Fig. 8). The projected intra-annual change in discharge (Fig. 9) is very similar to the precipitation change signal shown in Fig. 6. The discharge changes with non bias corrected climate data are similar in trend (with, however, differences in magnitude) compared to the changes observed with bias corrected data, which is consistent with changes in the climate signal induced by the bias correction. The Wilcoxon (1945) rank-sum test, testing the significance of the difference between bias corrected and non bias corrected discharge change signals for the climate model ensemble, indicates that the signals are not different at a p-level of 0.05. The p-values of the test obtained under both RCP4.5 and RCP8.5 are higher than 0.5, far from the level required to reject the null hypothesis (H 0 : discharge change with bias corrected data = discharge change with non bias corrected data). Hence, the bias correction impact on discharge change signal alteration can be considered negligible. The sensitivity of the catchment discharge to precipitation and temperature change is tested by plotting, for each member of the climate models' ensemble, predicted precipitation and temperature change against predicted discharge change. The result shows that change in total discharge cannot be strongly related to change in potential evapotranspiration (Fig. 10a). However, a high sensitivity of river discharge to precipitation change (Fig. 10b) is observed. Under scenario RCP4.5, an increase of +5 % in precipitation leads to an increase in discharge of about +12.5 %, whereas a decreased precipitation of the same order leads to a decrease in discharge of −13 %. The same simulations under RCP8.5 yield a +8.3 % discharge increase and a −14.7 % discharge decrease. Interestingly, under RCP8.5 and assuming comparable precipitation between reference and future periods, a discharge decrease of about −3.2 % should be expected (Fig. 10b). Ecohydrologic status The ecohydrologic status of the catchment for the reference period and future scenarios RCP4.5 and RCP8.5 is shown in Fig.
11 to illustrate the use of energy and water by the catchment while undergoing temperature increase and precipitation change. Moving left to right along the "Excess water − P ex " axis indicates that the environmental conditions in the catchment lead to an increase in discharge (CCLM-CNRM, RACMO-EARTH, and HIRHAM-NorESM). Reduction of discharge is experienced when moving the other way round (CCLM-ESM and CCLM-EARTH). Moving upwards along "Excess evaporative demand − E ex " implies drier environmental conditions due to an increase in evaporative demand and soil water deficit. Except for HIRHAM-EARTH, all the climate models project drier conditions (increase in Excess evaporative demand) under RCP4.5 as a result of an increased temperature not compensated for by the amount and/or timing of precipitation. Increased evaporative demand, with marginally aggravated drier conditions, is shown by CCLM-ESM, HIRHAM-NorESM, CCLM-EARTH, and the RCM-GCM ensemble mean under RCP8.5. The ecohydrologic status of the catchment, irrespective of climate model and emission scenario, projects a shift for the period of 2021-2050 compared to the reference period. Therefore, differences in climate conditions between the two periods influence the hydrology (discharge, evapotranspiration, precipitation) of the catchment. Historical runs' analysis All GCMs and RCMs applied in this study have proved in previous works to fairly reproduce the climatology of West Africa (Cook and Vizy, 2006; Dosio et al., 2015; Gbobaniyi et al., 2014; Paeth et al., 2011). The RCM-GCM ensemble reasonably captures the annual cycle of temperatures, and following several authors (e.g., Buontempo et al., 2014; Waongo et al., 2015) no bias correction was performed for this climate variable. The systematic positive bias and large deviation from observed precipitation exhibited by the climate models' ensemble in this study are also reported by several authors (Nikulin et al., 2012; Paeth et al., 2011) for the southern Sahel Zone. This deviation motivated the bias correction of precipitation. After correction, the positive bias is significantly reduced for all individual climate models and the improvement is clearly visible. In general, the RCM-GCM ensemble mean outperforms individual climate models for both temperature and precipitation. This is due to the fact that individual model errors of opposite sign cancel each other out (Nikulin et al., 2012; Paeth et al., 2011). However, the climate models' ensemble mean should not be considered an expected outcome (Nikulin et al., 2012). Rather, considering a large ensemble of climate models should be seen as necessary to properly perform future climate impact studies in the catchment (Gbobaniyi et al., 2014) and to assess the range of potential future hydrological status required for adaptation and management strategies. Figure 9. Monthly discharge change between the reference period and the future period (2021-2050) under emission scenarios RCP4.5 and RCP8.5. BC and UC refer to bias corrected and non bias corrected, respectively. Figure 11. Plot of excess precipitation (P ex ) vs. evaporative demand (E ex ) for the reference period and emission scenarios RCP4.5 and RCP8.5 (2021-2050) for the RCM-GCM ensemble. The shift in RCP dots compared to the reference period's dot indicates the effects of climate change on the catchment hydrology. P ex and E ex for each period are calculated from the annual average rainfall, potential evapotranspiration, and actual evapotranspiration.
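The last sentence of the Figure 11 caption can be turned into a small helper; note that the normalized form of the water-energy budget indices of Tomer and Schilling (2009) is assumed here, since Eqs. (1) and (2) themselves are not reproduced above, and the simple difference form would change only the scale of the plot, not the direction of the shifts.

def ecohydrologic_status(p, et_a, et_p):
    # Annual water-energy budget indices (normalized Tomer and Schilling, 2009, form assumed):
    #   p    - mean annual precipitation (mm)
    #   et_a - mean annual actual evapotranspiration (mm)
    #   et_p - mean annual potential evapotranspiration (mm)
    p_ex = (p - et_a) / p        # excess water: precipitation not evapotranspired
    e_ex = (et_p - et_a) / et_p  # excess evaporative demand: energy not used for ET
    return p_ex, e_ex

# Plotting (p_ex, e_ex) for 1971-2000 and 2021-2050 reproduces the shifts interpreted
# in Fig. 11: upward means drier conditions, rightward means more discharge.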
Climate change signal Compared to the period of 1971-2000, a clear temperature increase signal is projected for 2021-2050 by the six members of the RCM-GCM ensemble in the catchment. This feature is common to all multi-model ensemble studies performed in the region (IPCC, 2014). It is further in line with the historical temperature change observed in the region as reported by Waongo (2015), who used the same observation dataset applied in the current study. He reported an average +0.31 and +0.17 °C decade −1 increase for the minimum and maximum temperature, respectively, for the region considering the period of 1960-2010. However, the climate models' ensemble does not agree on the projected precipitation change signal, as wetter (RACMO-EARTH), drier (CCLM-ESM and CCLM-EARTH), as well as mixed (CCLM-CNRM, HIRHAM-NorESM, and HIRHAM-EARTH) trends are shown by the individual models. It is worth noting that the Dano catchment is located in a region where the Coupled Model Intercomparison Project Phase 5 (CMIP5) models showed divergent precipitation change for the mid-21st century (IPCC, 2014). The precipitation change projected by CCLM-CNRM and HIRHAM-NorESM, wetter conditions associated with drought during specific months, is consistent with the change reported by Patricola and Cook (2009) for the West African region. They highlighted an increase in precipitation in general, but also noted drier June and July months. A similar result is achieved by Kunstmann et al. (2008) in the Volta Basin, albeit with a decrease in precipitation at the beginning of the rainy season in April. Precipitation change projected by CCLM-ESM and CCLM-EARTH is consistent with the decrease in the June-July-August season noted by Buontempo et al. (2014). A reduction in precipitation during the rainy season is also achieved with RegCM3, driven by ECHAM5, in the Niger River Basin (Oguntunde and Abiodun, 2012). Up to 20.3 % reduction of precipitation in some months is projected, but an increased precipitation during the dry season is also expected. A critical analysis of CCLM (by Dosio et al., 2015) showed that the model is significantly influenced by the driving GCM (including EC-Earth, ESM-LR, and CNRM-CM). Such an analysis was not found for RACMO and HIRHAM. Overestimation of precipitation is a common feature of the RCM-GCM ensemble applied in this study, which could suggest that the RCMs inherit the bias from the GCM (Dosio et al., 2015). Consistent with Paeth et al. (2011), the relation between RCM trend and driving GCM cannot be observed in the current study, as CCLM-EARTH and RACMO-EARTH clearly show opposite trends although both are driven by EC-EARTH. Differences in projected trends are also highlighted by individual RCMs driven by different GCMs (e.g., CCLM-EARTH and CCLM-CNRM). Historical discharge Compared to the observation based simulation, non bias corrected RCM-GCM based discharge is characterized by an overestimation of annual discharge. This misrepresentation results from the positive precipitation bias presented by the climate models' ensemble. The bias correction significantly improves the ability of all members of the climate models' ensemble to reproduce the historical discharge regime. By comparing simulated discharge with bias corrected and non bias corrected precipitation data, it clearly appears that the bias correction methodology is effective with regard to both discharge regime and total discharge; thus, it increases the quality (correspondence between projection and observation) of the model (Murphy, 1993). However, a trend towards discharge overestimation was noticed after bias correction of precipitation. This could be related to i. the relatively long period used for the bias correction (1971-2000). As noticed by Piani et al. (2010), fragmenting the correction period into decades and deriving several transfer functions can improve the bias correction result and further contribute to capturing the decadal rainfall change that characterizes the West African climate; and ii.
the fact that temperature was not bias corrected. This led to ET p values that vary from one RCM-GCM to another, since ET p after Hamon is computed based on temperature values only (Table 2). As a result, a relatively large range of potential evapotranspiration is observed for the climate models as an ensemble (Table 6). In view of the generally good simulation of historical discharge for the climate models' ensemble, it is worth noting that running the hydrological model with simulated climate data of one node at a time (Sect. 2.2) has reasonably bridged the discrepancy between the RCM-GCM data resolution and the hydrological modeling domain (see Fig. S1 of the Supplement for the hydrological spread of the 9-node approach and Fig. S2 for the difference in precipitation between the 9-node approach and the standard 3 × 3-node average approach). Therefore, the approach can be considered eligible for climate change impact assessment for small-scale catchments in which interpolation methods create issues related to the representation of climate variables (particularly precipitation). However, besides regional climate specificities, its reliability might depend on the extent of the RCM domain used to simulate a given catchment climate, which in the case of this study was set at 0.44° × 3 by 0.44° × 3, i.e., the RCM models' resolution degraded by a factor of 3. In regions where data are available, historical RCM based discharges should necessarily be compared to historical observed discharge, which could not be done in the current study. Discharge change A mixed annual discharge change signal is projected by the climate models' ensemble for the period of 2021-2050. These trends agree with several studies in the region (Table 7), although all were carried out at the mesoscale and macroscale. -Negative trend (CCLM-ESM and CCLM-EARTH): a discharge decrease of 30 to 46 % is reported by Ruelland et al. (2012) using HadCM3 and MPI-M in the Bani catchment. A similar trend, resulting from a combination of temperature increase and precipitation decrease, was reached by Mbaye et al. (2015) using the REMO climate model in the Upper Senegal Basin, as did Cornelissen et al. (2013) and Bossa et al. (2014) in the Térou and Ouémé catchments in Benin, respectively. -Positive trend (CCLM-CNRM, RACMO-EARTH, and HIRHAM-NorESM): an increase of 38 % in annual discharge in the region is reported by Ardoin-Bardin et al. (2009) for the Sassandra catchment (south of the Dano catchment) using climate projections of HadCM3-A2. This results from an 11 % increase in precipitation not counterbalanced by the 4.5 % increase in potential evapotranspiration. This mixed hydrological change signal is the result of high uncertainties associated with the precipitation change projected by climate models for the catchment (IPCC, 2014). The Wilcoxon rank-sum test further indicated that bias correction did not significantly alter these discharge change signals. Due to the high sensitivity and nonlinear response of the catchment discharge to precipitation, any change in precipitation will have a strong impact on the discharge; the impact will further be pronounced under RCP8.5 compared to RCP4.5. Irrespective of emission scenario, change in potential evapotranspiration alone failed to strongly explain change in annual discharge (Fig. 10a); this is partly explained by the fact that the environmental system of the catchment is water limited and not energy limited.
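The Wilcoxon rank-sum comparison referred to above can be reproduced along the following lines; the use of scipy's rank-sum implementation and the 5 % level are the only assumptions beyond the text.

from scipy.stats import ranksums

def bias_correction_alters_signal(delta_q_bc, delta_q_uc, alpha=0.05):
    # delta_q_bc / delta_q_uc: annual discharge change signals (%) of the six ensemble
    # members obtained with bias corrected / non bias corrected precipitation input.
    stat, p_value = ranksums(delta_q_bc, delta_q_uc)
    # H0: both samples of change signals come from the same distribution.
    return p_value < alpha  # True would mean bias correction significantly alters the signal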
The water limited environment of the catchment might also explain the performance of the hydrological model for the climate models' ensemble even though temperature data were not bias corrected (up to 4 °C gaps between observed and simulated temperature were noticed for some months, Sect. 3.1). The annual evaporative demand for the climate models' ensemble, including RACMO-EARTH which underestimated observed temperature for the reference period, exceeds (almost doubles) precipitation (Table 6). In such a system, also characterized by extended periods with little to no precipitation (November-May), actual evapotranspiration is strongly controlled by precipitation (Guswa, 2005; Schenk and Jackson, 2002). Therefore, an increase in ET p is not necessarily translated into an increase in ET a , as limitation in precipitation (soil moisture) dictates water fluxes (Newman et al., 2006) (e.g., CCLM-EARTH and CCLM-ESM in Table 6). Ecohydrologic status The E ex − P ex plot (Fig. 11) allows an accurate display of the climate change impact on the catchment hydrology, as the main water balance components (precipitation, discharge, and evapotranspiration) are presented in an integrated manner. The overall ecohydrologic effect of climate change on the catchment, as shown by the plots, is a trend towards drier environmental conditions due to increased evaporative demand-E ex . This denotes an increase in potential evapotranspiration higher than the increase in actual evapotranspiration. By contrast, change in the proportion of precipitation converted to discharge-P ex appears specific to each climate model, with a marginal trend towards discharge increase for the models' ensemble under RCP4.5 and discharge decrease under RCP8.5. All the climate models that project a precipitation increase result in an ET a increase due to the warmer climate. For some of the climate scenarios the projected increase in ET a exceeds the increase in precipitation, resulting in a decrease in river discharge (unused water). This indicates that the catchment ecosystem (defined as the vegetation within the catchment and provided by the land use and land cover map of the catchment) is able to optimize the use of water and energy available in the environment, thus reducing unused water (P ex ) with temperature increase (Caylor et al., 2009). Such an optimization, although not investigated in this study, may lead plants to change the allocation of fixed carbon to various tissues and organs (Collins and Bras, 2007; Milne et al., 2002). The suitability of the catchment area for the current plant species could also be affected (McClean et al., 2005) by the projected climate change. In a previous study (Yira et al., 2016), land use in the catchment was found to be characterized by conversion from savannah to cropland, implying the reduction of the vegetation-covered fraction, root depth, leaf area index, etc. Such a land use and land cover change strongly affects the ecohydrologic status of a catchment. Tomer and Schilling (2009) highlighted that removal of perennial vegetation leads to an increase in both Excess Water-P ex and Excess evaporative demand-E ex . Combining this land use change with climate change impact would therefore on the one hand aggravate water stress for plants in the catchment and on the other hand increase the unused water in the catchment.
Conclusion An ensemble of six RCM-GCM datasets, all produced in the framework of the CORDEX-Africa project, was used as input to a hydrological simulation model to investigate climate change impact on water resources in the Dano catchment by the mid-21st century. The ability of the RCM-GCM ensemble to simulate historical climate and discharge was evaluated prior to future climate change impact assessment. The six climate models fairly reproduce the observed temperature. By contrast, bias correction was necessary for all climate models to accurately reproduce observed precipitation and historical discharge. The applied bias correction method further proved not to alter the discharge change signal: projected discharge change signals with and without bias corrected data were very comparable. This result indicates that (i) it is safe to perform bias correction; (ii) bias correction improves the quality of climate models' outputs; and (iii) it is not necessary to perform bias correction in order to detect a future discharge change signal in the catchment when relative changes in climate variables are used, as reported by several authors (e.g., Muerth et al., 2013; Hagemann et al., 2011). A temperature increase is consistently projected by the models' ensemble. This reinforces the commonly acknowledged warming signal for the region. However, the lack of agreement among models with regard to the projected precipitation change signal creates considerable uncertainty about how the catchment discharge will evolve by 2050. As discharge in the catchment is strongly determined by precipitation, no clear trend in future development of water resources can be concluded due to the high variability of the different climate models and scenarios. Therefore, potential increase and decrease in future discharge have to be considered in climate change adaptation strategies in the region. The ecohydrological concept as applied in this study proved to fully capture climate change impact on the hydrological conditions within the catchment, as the discharge, precipitation, and actual/potential evapotranspiration change signals are consistently displayed by the E ex − P ex plot; it further brings insights into the catchment hydro-climatic conditions, which can assist in the development of climate change adaptation strategies. The adopted "one node at a time" approach also appears suitable for the assessment of climate change impact on catchment hydrology of small-scale catchments. The approach enables the use of climate model input for catchments which are much smaller than the size of one climate model grid cell and, hence, makes approximate climate impact analyses possible at this scale. The results further underline on the one hand the need for a larger ensemble of projections to properly estimate the impacts of climate change on water resources in the catchment and on the other hand the high uncertainty associated with climate projections for the West African region. Therefore, assessing future climate change impact on water resources for the region needs to be continuously updated with the improvement of climate projections. Figure 1. Location map: (a) Dano catchment, (b) its location in Burkina Faso, and (c) in West Africa. (b) RCM domain used in the study. Figure 2. Absolute precipitation bias (corrected and not corrected) for the model ensemble compared to the observed data for the period of 1991-2000. The transfer functions were calibrated for the period 1971-1990.
Figure 3. Statistics of RCM-GCM based precipitation and temperature compared to observations (Obs) for the reference period (1971-2000). Climate model data are not bias corrected. Statistics are computed based on average monthly values.
Figure 5. Monthly air temperature derived from climate models and observations for the reference period. Data are not bias corrected. P factor = 100 % and R factor = 8.2.
Figure 6. Climate change signal of precipitation, air temperature, and evapotranspiration between the reference and future (2021-2050) periods under emission scenarios RCP4.5 and RCP8.5. BC is bias corrected and UC refers to non bias corrected.
Figure 7. Historical RCM-GCM based discharge simulations and observation based discharge: (a) RCM rainfall is bias corrected, (b) RCM rainfall non bias corrected.
Figure 10. Change in the annual discharge as a response to potential evapotranspiration (a) and precipitation (b) change under emission scenarios RCP4.5 and RCP8.5. Projected precipitation, potential evapotranspiration, and discharge changes are calculated comparing period 1971-2000 to period 2021-2050.
Table 1. RCM-GCM products and the corresponding label used in the study.
Table 2. Selected submodels and algorithms of WaSiM.
Table 3. Projected rainfall change between the reference and future (2021-2050) periods with bias corrected and non bias corrected RCM-GCM based simulations.
Table 4. Performance of RCM-GCM based discharge compared to observation based discharge. Performance is calculated using mean monthly discharges for the period 1971-2000.
Table 5. Mean annual discharge change projected by the RCM-GCM ensemble for the period 2021-2050 compared to the reference period 1971-2000.
Table 6. Mean annual water balance components per RCM-GCM for the historical and projected (2021-2050) periods. Precipitation data are bias corrected.
Table 7. Selected studies of climate impact on water resources in the West African region.
2019-01-02T05:08:44.068Z
2017-04-20T00:00:00.000
{ "year": 2017, "sha1": "cc6fba789dda3c6c368addbc94d11975519c986d", "oa_license": "CCBY", "oa_url": "https://www.hydrol-earth-syst-sci.net/21/2143/2017/hess-21-2143-2017.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "e82e1b14a5ba37bc7f0f20ea1253851c99db4172", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Environmental Science" ] }
235707286
pes2o/s2orc
v3-fos-license
Nasal colonization and antibiotic resistance patterns of Staphylococcus species isolated from healthy horses in Tripoli, Libya ABSTRACT The present study investigated the colonization rates and antimicrobial susceptibility of Staphylococcus species isolated from the nostrils of healthy horses. A nonselective laboratory approach was applied, followed by confirmation using a Phoenix automated microbiological system. Among the 92 horses included in the study, 48.9% (45/92) carried Staphylococcus species of mostly the coagulase-negative staphylococci (CoNS) type yielding 70 Staphylococcus strains. Of these strains, 37.1% (26/70; 24 CoNS and 2 coagulase-positive staphylococci; CoPS) were identified as methicillin-resistant staphylococci (MRS) expressing significant resistance to important antimicrobial classes represented mainly by subspecies of CoNS. This is the first study reporting a high prevalence of various Staphylococcus species, particularly strains of CoNS expressing multidrug resistance patterns of public health concern, colonizing healthy horses in Libya. Methicillin-resistant staphylococci (MRS) of clinical and public health concern have been increasingly reported in horses [17]. They have been reported in the skin and nasal passages of horses infected with diverse staphylococci causing opportunistic and zoonotic infections, such as methicillin-resistant Staphylococcus aureus (MRSA) and methicillin-resistant coagulase-negative staphylococci (MRCoNS) [18,19,22]. These pathogens are widely reported to have variable geographic distributions worldwide in clinical and healthy horses [16]. The current study investigated the prevalence and antimicrobial susceptibility of Staphylococcus species isolated from the nostrils of 92 healthy horses from four locations in Tripoli in January-February 2018. The inclusion criteria for the horses were no signs of any illnesses and no treatment with any medications, including antimicrobials, for at least three months prior to this study. The ages of the included horses ranged from 0.3 to 24 years (mean, 7.5 years), and the sex distribution was 77.2% (n=71) female and 22.8% (n=21) male. The breeds of the horses included Thoroughbreds (n=81, 88.0%), English (n=3, 3.3%), Arabians (n=1, 1.1%), and half-breeds (n=1, 1.1%). The remaining 6 (6.5%) horses were of unspecified breeds. The study was approved and registered by the Postgraduate Studies Department of the Ministry of Education, Libya (Reference number, 14144). The purpose of the study and benefits of participation were explained to all owners before the study, and informed consent was obtained. A total of two nasal specimens were obtained from the nostrils of each horse using moist sterile cotton-tipped (in-house) swabs. Each swab was inserted approximately 10 cm, pressed slightly against the mucosa, and then transferred to the laboratory and processed within 4 hr. Samples were streaked onto mannitol-salt agar and blood agar, respectively, and then incubated for 24-48 hr at 35°C. A typical colony was selected from each plate and further examined with a Gram stain and catalase test. Presumptive staphylococci isolates were further tested with a BD Phoenix automated identification and susceptibility testing system (PAMS, MSBD Biosciences, Sparks, MD, U.S.A.) for definite characterization at the genus and species levels and to determine the susceptibility against antimicrobial agents. 
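The carriage figures quoted above (45/92 horses, 26/70 isolates) are simple proportions; the short script below recomputes them and, purely for illustration, adds a Wilson 95 % confidence interval, which was not part of the original analysis.

def proportion_with_wilson_ci(k, n, z=1.96):
    # point estimate and 95 % Wilson confidence interval for k positives out of n
    p = k / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = z * ((p * (1 - p) / n + z ** 2 / (4 * n ** 2)) ** 0.5) / denom
    return p, centre - half, centre + half

print(proportion_with_wilson_ci(45, 92))  # staphylococcal carriage: ~0.489 (48.9 %)
print(proportion_with_wilson_ci(26, 70))  # methicillin resistance among isolates: ~0.371 (37.1 %)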
The antimicrobial susceptibility profiles of the confirmed staphylococci, and the assignment of MRS status, were determined based on the interpretation criteria referenced in Table 2. The present study revealed a high colonization rate of staphylococci compared with regional data [12]; 27.2% of the horses were colonized in both nostrils, and 21.7% of the horses were colonized in a single nostril, mostly by CoNS, which accounted for 84.4% of the horses testing positive. Strains of the S. sciuri group and S. xylosus are commensal bacteria of the skin and mucous membranes of different animal species, particularly horses, causing opportunistic infections in animals (e.g., mastitis or dermatitis) and zoonotic infections in humans in direct contact with them [14,25]. S. xylosus is frequently isolated from animal products (e.g., cheese, milk, and meat products) and is further used in flavour development and food processing [11]. In Africa, reports of MRS and MRSA in equine populations are very rare, and the colonization rate reported in the current study is higher compared with that in a regional report [12]. In the current study, most of the horses colonized with MRS were colonized with the CoNS group, mainly represented by the S. sciuri group. Such a finding has reportedly been linked to antibiotic selection pressure as well as a previous history of prolonged antibiotic treatments, hospitalization, and transportation stress [20][21][22]. Horses are frequently colonized with diverse CoNS strains found in healthy and clinical animals showing concerning multidrug resistance phenotypes with variable epidemiological distribution [1]. For instance, higher prevalences of CoNS and MRCoNS and no/low prevalence of MRSA have been reported in healthy horses in the Netherlands [7,9]. In Africa, such reports come mainly from food-producing animals, with limited available information on companion and pet animals [5]. In Libya, MRSA is the most reported nosocomial pathogen, exclusively isolated from human healthcare settings; however, critical multidrug-resistant Gram-negative rods and vancomycin-resistant enterococci (VRE) have recently emerged [2][3][4]. A recent study from Libya involving healthy and clinical cats and dogs revealed high colonization rates of various Staphylococcus species showing high multidrug (i.e., methicillin) resistance patterns and belonging mainly to CoNS species (MRCoNS) [13]. CoNS are recognized as a reservoir of virulence and antibiotic resistance genes that can be acquired by other staphylococci, mainly through the transconjugant transfer of the staphylococcal cassette chromosome mec (SCCmec) transposon containing the mecA gene, as in the case of transfer between S. aureus and S. epidermidis [24]. Another mec gene homolog, currently designated mecC, which shares about 70% similarity with the mecA gene, was identified in 2011, and it is carried by SCCmec elements isolated from animals, human clinical specimens, and the environment [8]. Unfortunately, due to the limitations of the current study, these important and widely reported genes were not investigated within the studied collection. Antimicrobial susceptibility testing of Staphylococcus species could provide empirical data to guide therapy and overcome recurrent infections; however, other factors should be taken into consideration, such as the infection site, infection type, age, and health status [1].
For instance, the response to fluoroquinolone therapy for MRS is unpredictable despite in vitro susceptibility, and resistance may develop during antibiotic therapy [23]. Although suitable antimicrobials, such as chloramphenicol and trimethoprim-sulfonamide, can be used, the use of other critically important drugs, such as vancomycin, linezolid, mupirocin, rifampin, and fusidic acid, should be limited due to the controversial nature of their use in horses and their importance to human medicine. In addition, controlling the colonization of MRS in horses is problematic because transient colonization can be normal in horses, and thus decolonization with an antimicrobial therapy is not recommended [23]. Staphylococcus species of veterinary origin are difficult to characterize due to the lack of developed diagnostic protocols [15]. The automated Phoenix system has been widely used as an effective tool to identify species of staphylococci and determine antimicrobial susceptibility; however, a few species are not easy to identify [6]. For instance, S. pseudintermedius is frequently misdiagnosed as S. aureus due to their close phenotypic characteristics, so that advanced molecular protocols, such as PCR and matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF MS), are required for a definite identification [15]. Furthermore, the current findings reveal the need to follow therapeutic guidelines and control and prevention measures to minimize the spread of Staphylococcus species with antimicrobial resistance. Further analyses of MRS colonization and transmission and the associated risk factors are required in equine medicine, adopting the One Health concept.
2021-07-03T05:20:42.178Z
2021-06-01T00:00:00.000
{ "year": 2021, "sha1": "e28e7f04ec053e3dc80e1dcaa474d06855a37cb6", "oa_license": "CCBYNCND", "oa_url": "https://www.jstage.jst.go.jp/article/jes/32/2/32_2031/_pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e28e7f04ec053e3dc80e1dcaa474d06855a37cb6", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
17650718
pes2o/s2orc
v3-fos-license
Quantizing Poisson Manifolds This paper extends Kontsevich's ideas on quantizing Poisson manifolds. A new differential is added to the Hodge decomposition of the Hochschild complex, so that it becomes a bicomplex, even more similar to the classical Hodge theory for complex manifolds. These notes grew out of the author's attempt to understand Kontsevich's ideas [Kon95a] on quantizing Poisson manifolds. We introduce a new differential on the Hochschild complex, so that it becomes a bicomplex, see Theorem 2.1. This differential respects the Hodge decomposition of the Hochschild complex of a commutative algebra discovered by Gerstenhaber-Schack [GS87]. Thus, the Hochschild complex becomes similar to the ∂-∂̄-complex in complex geometry. Hopefully, Hodge-theoretic ideas à la Deligne-Griffiths-Morgan-Sullivan [DGMS75, Sul77] will eventually result in proving Kontsevich's Formality Conjecture, which implies local quantization of an arbitrary Poisson manifold, a hard problem that has been around for almost twenty years [BFF + 78]; see [Wei95] for the most state-of-the-art survey of this subject. 1. Kontsevich's Formality Conjecture 1.1. Some formalities. Let A = C ∞ (X) be the algebra of smooth functions on a smooth real manifold X. Let C • (A, A) be the (local) Hochschild complex of the algebra A over X, i.e., C n (A, A) = {φ ∈ Hom(A ⊗n , A) | φ(f 1 , . . . , f n ) is a differential operator in each entry f 1 , . . . , f n }. The Hochschild-Kostant-Rosenberg Theorem [HKR62] provides the computation of the corresponding Hochschild cohomology, which is nothing but the smooth multivector fields on X. Notice that both the Hochschild complex and its cohomology (more precisely, the suspensions thereof) are differential graded Lie algebras (DGLA's). The complex C • = C • (A, A)[1], where the suspension K[1] of a complex K • is defined by K[1] n = K n+1 , carries a Gerstenhaber bracket [Ger63], which may be defined naturally, see [Sta93], by observing that the Hochschild cochains are exactly the coderivations of the tensor coalgebra T (A) = ⊕ n≥0 A ⊗n ; then the Gerstenhaber bracket is just the bracket of coderivations. This bracket defines a DGLA structure on C • . The (suspended) Z-graded vector space H • = Λ • T X[1] of multivector fields is a DGLA with respect to the trivial differential d = 0 and the Schouten-Nijenhuis bracket of multivector fields, the canonical extension of the Lie bracket of vector fields to a graded biderivation of the exterior product. Every DGLA L • induces an obvious DGLA structure on its cohomology H • (L • ) with the trivial differential. In this sense the Hochschild-Kostant-Rosenberg Theorem may be refined by saying that the cohomology DGLA of the Hochschild complex is isomorphic to the DGLA of multivector fields, see [GS88]. Kontsevich's Formality Conjecture [Kon95a] suggests a further, profound refinement of the Hochschild-Kostant-Rosenberg Theorem. Conjecture 1.1 (Kontsevich's Formality Conjecture). The Hochschild complex C • is quasi-isomorphic as a DGLA to its cohomology H • . We recall that two DGLA's L and L ′ are quasi-isomorphic, if there is a chain L = L 1 → L 2 ← L 3 → · · · ← L n = L ′ of DGLA homomorphisms all of which induce isomorphisms of cohomology. Perhaps, in this conjecture one should consider a weaker notion of quasi-isomorphism, where the intermediate steps L 2 , L 3 , . . . , L n−1 are L ∞ -algebras rather than DG Lie. Remark 1.2. There exists a natural embedding H • → C • , "a multivector field is considered as a multiderivation of the algebra A of functions", which induces an isomorphism of cohomology. This embedding does not satisfy the conditions of the conjecture, because it does not respect the brackets.
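For the reader's convenience, one standard sign convention for the Schouten-Nijenhuis bracket on decomposable multivector fields is recalled below; the explicit formula (and its sign convention) is supplied here as a standard reference point and is not quoted from the text.

\[
[\xi_1\wedge\cdots\wedge\xi_k,\;\eta_1\wedge\cdots\wedge\eta_l]
=\sum_{i=1}^{k}\sum_{j=1}^{l}(-1)^{i+j}\,[\xi_i,\eta_j]\wedge
\xi_1\wedge\cdots\widehat{\xi_i}\cdots\wedge\xi_k\wedge
\eta_1\wedge\cdots\widehat{\eta_j}\cdots\wedge\eta_l ,
\]

where the \(\xi\)'s and \(\eta\)'s are vector fields, \([\xi_i,\eta_j]\) is their Lie bracket, a hat marks an omitted factor, and the bracket of a vector field with a function is \([\xi,f]=\xi(f)\).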
It is not hard to come up with a counterexample. In rational homotopy theory, there is a similar discouragement: the mapping H • (X) → Ω • (X) which takes a cohomology class to its harmonic representative is a quasi-isomorphism, but the product of two harmonic forms is not harmonic in general. Nevertheless, the two differential graded associative algebras H • (X) and Ω • (X) are quasi-isomorphic for a compact Kähler X, see Section 1.4. We will take the "physical" point of view and discuss evidence for the Formality Conjecture after seeing what implications it has. 1.2. Deformation quantization of Poisson manifolds. Recall that a deformation quantization [BFF + 78] of a Poisson manifold X, whose algebra of smooth functions will be denoted by A, as above, is a formal deformation of A in the direction of the Poisson bracket. More precisely, it is an associative product a ⋆ b = ab + ħB 1 (a, b) + ħ 2 B 2 (a, b) + · · · given by bidifferential operators B n , whose first-order term reproduces (up to a constant factor) the Poisson bracket; here ab is the usual, undeformed multiplication and {a, b} is the Poisson bracket. When the Poisson bracket is nondegenerate, i.e., coming from a symplectic structure, the existence of deformation quantization was proven by De Wilde and Lecomte [DWL83] and Fedosov [Fed85]. When the Poisson bracket is arbitrary, the existence of deformation quantization (even locally, for R n ) is an open problem. The remarkable fact noticed by Kontsevich is that if you assume the Formality Conjecture, the problem of quantization will be solved. Theorem 1.3 (Kontsevich). The Formality Conjecture for a manifold X implies deformation quantization of any Poisson structure on X. Proof. We will only sketch the idea of the proof; (some) details may be found in Kontsevich's Berkeley lectures [Kon95b]. According to Deligne-Schlessinger-Stasheff-Goldman-Millson's approach to deformation theory, see [GM90, Mil91, SS85], with each DGLA L • one can associate the formal moduli space of solutions γ ∈ L 1 of the equation dγ + [γ, γ]/2 = 0 modulo the action of the group exp L 0 , which is the Lie group corresponding to the Lie algebra L 0 ; the defining equation dγ + [γ, γ]/2 = 0 is the Maurer-Cartan (deformation) equation. If a morphism L → L ′ is a quasi-isomorphism of DGLA's, then the corresponding formal moduli spaces can be identified. This is done using the standard machinery of minimal models. Formal deformations are usually formal paths in the formal moduli spaces. Consider the cases of the above two DGLA's associated to a manifold X. The formal moduli space associated to the DGLA C • may be thought of as the moduli space M Q of deformation quantizations of A, and the one associated to H • as the moduli space M P of (formal) Poisson structures on X. Now suppose that the Formality Conjecture is true. Then the two moduli spaces M Q and M P are identified. Given a Poisson structure on X, we can connect it by a straight line with the origin in the moduli space of Poisson structures. Consider this line as a formal path. Using the isomorphism of the moduli spaces, we have a formal path in the moduli space of quantizations, which is a deformation quantization we were looking for.
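A hedged summary of the two deformation problems entering the proof, written out in the usual Maurer-Cartan language (only the notation M_P, M_Q comes from the text; the explicit formulas below are the standard ones and are supplied as assumptions):

\[
\mathcal{M}_L \;=\; \Bigl\{\,\gamma\in L^1 \;\Bigm|\; d\gamma+\tfrac12[\gamma,\gamma]=0\,\Bigr\}\Big/\exp\bigl(L^0\bigr).
\]

For \(L=H^\bullet\) (trivial differential) the equation reduces to \([\pi,\pi]=0\) for a formal bivector field \(\pi\), so \(\mathcal{M}_P\) parametrizes formal Poisson structures; for \(L=C^\bullet\) a Maurer-Cartan element is precisely the correction term \(\sum_{n\ge1}\hbar^n B_n\) of an associative star product, so \(\mathcal{M}_Q\) parametrizes deformation quantizations of \(A\).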
Kazhdan [EK96] solves the conjecture of Drinfeld asserting that every Poisson Lie group has a canonical quantization. Mirror Symmetry predicts that for a Calabi-Yau manifold Y , the corresponding holomorphic version RΓ(Y, C • ) of the Hochschild DGLA for the sheaf of holomorphic functions on Y gives rise to a smooth formal moduli space, which may be interpreted as the moduli space of "noncommutative Calabi-Yau manifolds". On the other hand, one can show that the holomorphic multivector field DGLA RΓ(Y, H • ) produces a smooth formal moduli space. Thus, if C • and H • were known to be quasi-isomorphic, it would prove the smoothness of the first moduli space as confirmed by Mirror Symmetry. Formality in rational homotopy theory. Kontsevich's Formality Conjecture has a very close analogy with the Deligne-Griffiths-Morgan-Sullivan Formality Theorem [DGMS75]: the Sullivan model of a compact Kähler manifold X is formal. The Sullivan model of X may be represented by the differential graded commutative algebra (DGA) Ω • (X) of smooth differential forms on X. Formality means that Ω • (X) is quasi-isomorphic to its cohomology DGA H • (X). A simple way to prove this is using Hodge theory, see [Sul77]: decompose the de Rham differential into the holomorphic and antiholomorphic parts: d = ∂ +∂. Standard Hodgetheoretic arguments (the ∂-∂-Lemma of [DGMS75]) imply that (Ker ∂, d) ⊂ (Ω • (X), d) is an embedding of DGA's, which is a quasi-isomorphism. On the other hand, the natural morphism (Ker ∂, d) → (Ker ∂/ Im ∂, d) = (H • (X), 0) of DGA's is also a quasi-isomorphism for the same reasons. Hodge theory for the Hochschild complex In this section, we are going to develop Hodge theory in the Hochschild context. The construction of Hodge decomposition of the Hochschild complex of a commutative algebra A over a field of characteristic zero goes back to Gerstenhaber and Schack [GS87], who decomposed the Hochschild complex C • (A, A) into the direct sum of C p,q (A, A), with the Hochschild differential d acting like∂ in the Dolbeault complex: d : C p,q (A, A) → C p,q+1 (A, A). Here we add a new ingredient to Gerstenhaber-Schack's Hodge theory: we define an extra, ∂-like differential d ′ : C p,q (A, A) → C p−1,q (A, A) on the Hochschild complex, so that it becomes a bicomplex. This bicomplex is similar to the ∂-∂-complex of a compact Kähler manifold: the total cohomology of the bicomplex is equal to the cohomology of one of the differentials. Our new differential is also similar to the differential B of the cyclic cohomology complex. Together with the Hochschild differential, the differential B provides the cyclic cohomology complex with the structure of a bicomplex and, moreover, respects the Hodge decomposition of the cyclic cohomology complex in a similar way, see J.-L. Loday [Lod89]. Another similarity between the cyclic B and our differential is that the cohomology of both vanish. We will recall Hodge decomposition of the Hochschild complex, following the modification of M. Ronco, A. B. Sletsjøe, and H. L. Wolfgang, see [BW95] for more detail. Let r and s be positive integers and n = r + s. The shuffle product of tensors a 1 ⊗ · · · ⊗ a r ∈ A ⊗r and a r+1 ⊗ · · · ⊗ a n ∈ A ⊗s is the element sgn(σ)a σ −1 (1) ⊗ · · · ⊗ a σ −1 (n) ∈ A ⊗n , where the summation runs over those σ ∈ S n for which σ(1) < σ(2) < · · · < σ(r) and σ(r + 1) < · · · < σ(n). Let Sh k denote the image of shuffle products of k elements in the tensor algebra T (A) = n≥0 A ⊗n . By definition Sh 0 = T (A) and Sh 1 = n>0 A ⊗n . 
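As a small worked example of the shuffle product just defined (added here for illustration, not taken from the original text): the shuffle product of the one-element tensors $a_1$ and $a_2$ is
\[
a_1\otimes a_2 - a_2\otimes a_1 ,
\]
and the shuffle product of $a_1\otimes a_2$ with $a_3$ is
\[
a_1\otimes a_2\otimes a_3 - a_1\otimes a_3\otimes a_2 + a_3\otimes a_1\otimes a_2 ,
\]
the signs being the signatures of the three permutations that keep $a_1$ before $a_2$. In particular, in degree two the space $\mathrm{Sh}_2$ is spanned by the antisymmetrized tensors $a_1\otimes a_2 - a_2\otimes a_1$, so a 2-cochain vanishes on $\mathrm{Sh}_2$ exactly when it is symmetric in its two arguments.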
We have a filtration of the tensor algebra where p, q ≥ 0. Of course, one can describe C p,q as A-valued functionals φ on the subspace of A ⊗p+q generated by the shuffle products Sh p of p elements, such that φ vanishes on the shuffle products of p + 1 elements. One can check that the Hochschild differential induces a mapping d : C p,q → C p,q+1 , d 2 = 0, and C n (A, A) ∼ = p+q=n C p,q . This gives the Hodge decomposition of the Hochschild complex into the direct sum of complexes: 3. The cohomology of the differential d ′ vanishes. The spectral sequence associated to the second filtration ′′ F q = j≥q C i,j collapses at ′′ E 1 , which is equal to 0. 4. Suppose that A is the algebra of smooth functions on a manifold or regular functions on a nonsingular affine scheme. Then the first spectral sequence collapses at ′ E 1 , which is equal to H •,0 (A, A), the space of global multivector fields. This coincides with the total cohomology of the bicomplex. Remark 2.2. The differential d ′ , being a derivation of the Gerstenhaber bracket of degree −1, defines the structure of a DGLA on the Hochschild complex (C • (A, A)[1], d ′ ). However, the total complex (C • (A, A)[1], d + d ′ ) is only a differential Z/2Z-graded Lie algebra: the degree of the total differential d + d ′ is equal to one modulo two. If φ = 0 on Sh r , then kφ = 0 on Sh r+1 , therefore k is well-defined on C p,q and maps it to C p+1,q . A straightforward computation shows that kd ′ + d ′ k = id. Thus, the cohomology of d ′ vanishes. Since ′′ F q / ′′ F q+1 = C •,q with the differential d ′ , ′′ E q 1 = H • ( ′′ F q / ′′ F q+1 , d ′ ) = 0, and the second spectral sequence collapses. 4. If A is a regular algebra of functions, its Hochschild cohomology H • (A, A) is equal to the space of multivector fields, see [HKR62]. The multivector fields are skew multiderivations of A and therefore project bijectively on H •,0 (A, A). In this case, the differential d ′ vanishes on all Hochschild cocycles, because derivations of A vanish on constants. Therefore ′ E 1 = ′ E 2 = · · · = ′ E ∞ = H •,0 (A, A). The computation of the total cohomology then follows from Part 2.
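For completeness, here is the standard one-line argument, restating in LaTeX notation the step just used: given the homotopy identity $k d' + d' k = \mathrm{id}$, any $d'$-cocycle $\phi$ satisfies
\[
\phi = (k d' + d' k)\,\phi = k(d'\phi) + d'(k\phi) = d'(k\phi),
\]
so every $d'$-cocycle is a $d'$-coboundary and the cohomology of $d'$ vanishes, as claimed in Part 3. Applied row by row, the same identity shows that each complex $(C^{\bullet,q}, d')$ is acyclic, which is exactly why the second spectral sequence collapses with $''E_1 = 0$.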
2014-10-01T00:00:00.000Z
1997-01-16T00:00:00.000
{ "year": 1997, "sha1": "236600f547156b2fb89da2c10d3f87bc9355cfcb", "oa_license": null, "oa_url": "http://arxiv.org/pdf/q-alg/9701017v1.pdf", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "f99996fd627ed3391adf062f11d1b116995ed581", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
221825301
pes2o/s2orc
v3-fos-license
The prognosis and prevention measures for mental health in COVID-19 patients: through the experience of SARS Due to the high pathogenicity and mortality, the COVID-19 disaster caused global panic and anxiety. At present, diagnosis and treatment are of great concern. As time progresses, however, the sequelae caused by many other organ system complications and treatments will become increasingly obvious, and psychosomatic symptoms are one of these changes with great potential impact. Studies have shown that symptoms like poor sleep quality, anxiety and even delirium are not uncommon in patients during isolation. By summarizing the follow-up study on mental and psychological health of SARS in the past 10 years, and combining the characteristics of the existing cases of COVID-19, we will provide suggestions for the prevention and treatment of psychological diseases in clinical practice. Background Nearly 8 months have passed since the first confirmed case of Corona Virus Disease-2019 (COVID-19) was found in Wuhan, China, in December 2019, and now the world is threatened by this highly infectious and devastating viral pneumonia. As of 26th August 2020, around 23,965,059 people in the world have been infected with the virus, and this number is still surging every day in Europe and America, the case fatality rate even exceeded 5%. In China, the earliest outbreak point, although the infection has been basically controlled, a large number of discharged patients are facing the burden brought by the complications of various organ systems and the mental pressure from the change of social roles and environment. As for the 187 countries where the outbreak is still raging, effective mental health interventions are needed to respond to the treatment pain of a large number of isolated patients, the depression of severe patients and the psychological pressure of medical staff under overload work. In view of the strong homology between the COVID-19 genome sequence and Severe Acute Respiratory Syndrome (SARS) [1], the currently reported cases of COVID-19 and SARS patients have great similarities in the symptoms of various organ systems such as pulmonary fibrosis, viral myocarditis, acute renal injury and other symptoms [2], and mental health is no exception [3]. In the follow-up study of SARS patients for nearly 10 years, it can be found that the mental symptoms will continue to exist after discharge and a considerable proportion of survivors will have mental diseases such as depression and Posttraumatic Stress Disorder (PSTD) in the recovery period due to the influence of social factors and identity changes [4]; moreover, severe patients, high-dose corticosteroids and the status of medical staff are independent predictors of mental illness. Since the transmissibility and duration of critically ill patients with COVID-19 are much higher than that of SARS [5,6], it is reasonable to believe that under the long-term pressure of this epidemic on patients, medical staff and the health care system, there will be a larger population facing various potential pressures in the future, such as the persistence or even worsening of residual symptoms, complications and side effects of treatment, and the negative impact on their quality of life and social role function after returning to society. 
Currently, we know little about the long-term impact of mental health of COVID-19 patients, so the importance of detection and treatment of comorbidity is selfevident, and mental health services can play an important role in rehabilitation. By summarizing the potential impact of major emergent events on mental health and behavioral symptoms such as anxiety, stress, fear, violence, progressive neurological dysfunction and cognitive impairment of survivors, and combining previous research and data characteristics of mental disorders caused by infectious diseases such as SARS, this paper concluded the mental and psychological symptoms that may appear on a large scale in the prognosis of patients with COVID-19 a highly dangerous disease spreading globally today, the risk factors and possible effective intervention programs. Then, it can further provide a comprehensive reference for the researchers in the field of global mental health and front-line workers, such as improving the medication plan in the treatment period, conducting certain psychological counseling as early as possible, and focusing on high-risk groups in the followup period of prognosis, moreover, regularly carrying out psychological assessment and psychological intervention treatment and giving certain drugs to control the further development of mental and psychological problems and the progressive damage of other organs for patients with severe emotional stress. The universality and severity of mental diseases Excluding acute lung, heart, kidney and other organic damage, psychological damage may have a wide and long-term impact. The outbreak of SARS in 2003 can be regarded as a mental health disaster. PTSD is the most common chronic psychosis, followed by depression. The problem of mental illness after the epidemic is extremely serious, and its influence covers all fields and is closely related to everyone. In one follow-up study of 195 adult SARS patients, 10 to 18% reported PTSD 1 month after discharge, and the severity of symptoms was related to higher perceived life threat and lower emotional support [7]. Another study of 180 SARS survivors (average age 36.9 years) also pointed out that the psychological distress in them after 1 month's recovery was real and significant, and negative assessment may play a key role in the development of psychological distress in these survivors. For instance, the negative appraisal of acute phase with significant influence was 'passing the SARS virus on to the family', whereas the convalescence after recovery was changed to 'drug side-effects' and 'permanent damage to health' [8]. For SARS patients, compared with a control group, the study showed that the stress level was higher during the outbreak and no sign of decline after 1 year. SARS survivors also showed worrisome depression, anxiety and post-traumatic symptoms, with an alarming proportion (64%) of those who had reached the diagnostic criteria for psychosis [9]. More studies have shown that in the 4 years of SARS treatment, PTSD occurred in a high proportion (44.1%) of subjects who recovered from SARS, with persistent psychological distress and social function weakening [10], and such a huge psychological trauma was also observed in Middle East Respiratory Syndrome (MERS) [11]. However, mental and psychological stress and post-traumatic stress disorder not only cause great persistent damage to patients themselves, but also post serious threats to their contacts. 
Due to the pressure of contagion, people tend to vent their emotions to innocent people, and those who feel exposed to danger may have a strong tendency to insult others and even resort to violence against them [12], like what happened in Ukraine. Notably, the mental health of survivors in terms of stress, anxiety, depression and post-traumatic symptoms did not improve over time, but gradually deteriorated [13]. The study found that most of the first symptoms of depression occurred in the late stage of treatment and 3 months after discharge, while the existence of some protective factors such as feeling lucky, increased civic awareness and a sense of solidarity, played a role in alleviating depression, but as the epidemic passed, the buffering effect gradually weakened, and then the depression continued to grow [8]. Another report also confirmed this: when life is no longer under imminent threat, other concerns gradually emerge, such as the fear in isolation ward, complications of SARS and their treatment (such as avascular necrosis of the femoral head and osteoporosis), discrimination, unemployment, economic pressure, or the threat of other subsequent outbreaks of infectious diseases that may or have occurred [14]. The pathogenesis of mental diseases after highly infectious diseases It is reported that neuroendocrine, neural structure and neuroimmune disorders play an important role in depression and PTSD. On the one hand, coronavirus can guide the immune damage mechanism in the body and induce a large number of inflammatory reactions. In the blood and Cerebrospinal fluid (CSF) of patients with coronavirus pneumonia, Interleukin-6(IL-6), C-reactive protein (CRP), Interleukin-1(IL-1), Tumor Necrosis Factor (TNF) and other proinflammatory factors are all increased to varying degrees [15], among with high levels of IL-6 are strongly related to progressive neurological dysfunction, neurodegeneration and cognitive impairment [16]. This probably the mechanism how the virus affects the patient's mental health molecularly. And many evidences show that the inflammatory reaction in the lung, heart and nervous system will last for a long time in the prognosis of SARS [17], and lead to a variety of chronic inflammatory diseases such as pulmonary fibrosis, viral myocarditis, multiple sclerosis [18], while the chronic inflammation, especially the inflammatory complications of the nervous system related to the changes of brain structure and function, which is considered to be one of the pathogenesis of depression [19,20]. On the other hand, the inflammatory reaction of the nervous system will cause the metabolic changes of the basal ganglia and affect the information processing through the cingulate cortex. Changes in the cingulate cortex may indicate increased sensitivity to conflict and negative life events [21], which further complements the possible pathogenesis of depression in patients with COVID-19 . Secondly, in PSTD, the increase of corticotropin releasing factor (CRF) induced by traumatic stress may play an important role in the process of enhancing activated macrophage releasing facto [22], and these macrophages are widely distributed in the peripheral nervous system and brain, thus causing stress response to the decrease of stress threshold. The long-term prognosis of COVID-19 patients with mental disorders after rehabilitation is similar to that of SARS and other major epidemics [23], and its severity and persistence are immeasurable. 
The study points out that with the delay of time and the deepening of symptoms, the proinflammatory cytokines in patients with depression, fatigue and severe insomnia will further increase, which will aggravate the complications of other organ systems [24,25]. Long term depression is an important cause of cognitive decline and neurodegenerative change, and the activation of macrophages in the blood and microglia in the brain caused by chronic inflammation will further aggravate neurodegenerative change, which is closely related to the transformation from depression to dementia [19]. Therefore, extensive psychological treatment and counseling services need to be paid attention to and to strictly control the progress of mental and psychological diseases. High risk factors of mental disorders caused by COVID-19 In previous retrospective studies, the occurrence of psychosocial diseases is often strongly associated with specific population or special environmental factors. For example, although the stress level of medical staff among SARS survivors during the outbreak was similar to that of non-medical staff, a year later, the pressure level of medical staff was significantly higher, with higher levels of depression, anxiety, post-traumatic symptoms and scores. After controlling factors such as age, gender and education, health care workers are six times more likely to score above the General Health Questionnaire-12 (GHQ-12) threshold due to the dual pressure of being patients and medical staff [26], so the situation of SARS survivors among medical staff is particularly worrying and they need more attention [26]. 17.3% of the medical staff without the disease had similar mental symptoms [27], including chronic fatigue, pain, weakness, depression and sleep disorders [28]. Over the long term, of the 90 subjects recruited, a quarter had post-traumatic stress disorder (PTSD) 30 months after SARS, while 15.6% had depression [13]. What's more, different occupations have different effects on mental health, doctors have more physical symptoms compared to nurses [29]. Our results, therefore, highlight the need to enhance the preparedness and capacity of healthcare professionals to detect and manage the psychological consequences of future comparable outbreaks of infectious diseases. The infection of virus is not only the battle of patients and medical staff, but also the battle of the whole country and people, as it is closely related to everyone and no one can stay out of the business. Studies have showed that the residents in the areas with high prevalence of SARS, regardless of age, continued to develop more serious post-traumatic interference than those in the areas with low prevalence of SARS. In addition, the prevalence of PTSD was significantly higher in the elderly and residents in SARS epidemic areas [30]. The most relevant predictor is the financial stress caused by reduced income after illness, and other items included gender, range of activities, dietary restrictions, travel restrictions, clothing disinfection and infection control [31]. These different pressures are highly correlated with the psychological stress index, so the causes of psychological problems are not only isolated factors. During one-year follow-up period, evidence showed that both women and medical staff were risk factors for mental maladjustment. 
Female survivors had higher stress levels, more severe depression and anxiety, and more severe post-traumatic stress symptoms, and their GHQ-12 scores were three times higher than that of men. Among them, women and participants with low education level are more likely to have avoidance symptoms [7]. In a study of college students, the number of stressors and the use of avoidance strategies can positively predict psychological symptoms [32] . When controlling stressors, positive coping can positively predict life satisfaction [33]. Therefore, in the face of large-scale stressors such as COVID-19, increasing psychological counseling and treatment services will be of great significance. We found that the high-risk factors for the prognosis of patients with COVID-19 are not only related to the specific population, but also highly related to the treatment drugs in the acute phase. For example, the use of a large number of interferon in the treatment of coronavirus will lead to emotional depression, anxiety, shortterm memory disorders of sleep disorders through the impact on endocrine and changes in neurotransmitter and immune system changes [34]; meanwhile, high dose of exogenous steroids can cause reversible memory damage through the effect on hippocampal metabolism [35]. The necessity of prevention of mental disorders in COVID-19 patients From the current situation of transmission, we are concerned about the mental state of people around the world. With regard to SARS, whose outbreak was brought under control in a few months mainly due to the two characteristics of SARS coronavirus: First, coronavirus is a large virus and is not easy to mutate; second, the infected people show obvious symptoms when they are likely to spread the disease, so they can be identified and isolated in time [12]. However, for this new coronavirus disease, the mutation of the virus is currently unknown, but some studies believe that novel coronavirus in different regions has some genetic differences. Furthermore, Zhong Nanshan believes that the longest incubation period of coronavirus is 21 days, with an average incubation period of 7 days [36]. A large number of asymptomatic cases have been reported in many countries, which has brought great difficulties to identification and isolation while greatly increasing anxiety and fear in people. For these patients with severe acute respiratory syndrome, the dread of this new type of fatal infectious disease, the fear of relatives and friends' infection, the experience of witnessing adverse events during hospitalization, the uncertainty of prognosis, and the nursing experience in Intensive care unit (ICU) are all terrible experiences. Under an overburdened medical system, followed by the contagion of stress in special occupational groups, especially for medical workers, inadequate personal protective equipment, unsystematic isolation process, and inactive disease control will make fear spread rapidly among individuals [12]. Treatment suggestions for improving social psychological prognosis Psychological intervention is very necessary for patients, health care workers and the general public. We should pay attention to the high-risk groups prone to disease, and at the same time eliminate the stimulation factors that induce PTSD and depression as much as possible, and timely and early intervention is beneficial to reduce the incidence of PTSD [37]. 
We should not be overly pessimistic the prognosis of patients with COVID-19, although part of the source of mental illness is poor prognosis [4]. For some drugs that cause long-term adverse physical conditions, such as hormones [35], they are abandoned or restricted in the treatment guidelines of COVID-19, and according to all materials known on PubMed, the newly add drugs have no clear long-term impact. According to statistics, effective antidepressants such as chronic antidepressants can greatly reduce the degree of depression and anxiety symptoms [38], its chronic effect is related to the repair and possible construction of injured neural network; moreover, it can promote the growth of axons and dendrites [39], and may also play a role in the organ injury caused by chronic inflammation. However, we are concerned about the psychological status of medical workers in some countries, because some studies have shown that during the outbreak, medical workers experienced anxiety, depression and poor sleep. Such situation began to improve 2 weeks after the adoption of SARS prevention and control measures, and systematic SARS prevention plan improved the mental state of medical workers [40]. At the same time, the country's active and timely resource scheduling and integration is particularly important, and ecouraging the health care workers and improving the long-term potential for adverse mental health conditions is an important part of the country's response to the epidemic [41]. It should be noted that stigmatization of disease may have a longer-term impact than disease itself. Two thousand three SARS virus, HIV and Spanish influenza, which was mistakenly associated with Spanish people, all of them have caused huge psychological and economic losses to specific groups and regions. The stigmatization of diseases, especially of certain ethnic groups, can lead to adverse social emotions and promote racism. In 2006, don C. des jarlais PhD proposed a positive and effective strategy for the United States to remove the stigma of Acquired Immune Deficiency Syndrome (AIDS) and protect people suffering from AIDS, such as the President of the United States posing with AIDS patients, and the promulgation of the Americans with Disabilities Act [42]. It is the common goal for all countries to pay attention to the prevention and treatment of diseases and take measures to protect the patients and it is also a task for every country to remove the stigma of disease. Therefore, in the COVID-19 treatment and prognosis stage, evaluation of psychological problems requires flexible skills due to subjects' fear of stigmatisation. Generally speaking, it is of great significance to carry out active and effective mental health education for the whole people, popularize relevant mental education assistance work as soon as possible, and provide timely psychological support with self-coping strategies to enhance their resilience and reduce their fear, anxiety and pressure. At the same time, it is more important to ensure the psychological status of specific occupations and high-risk groups, and the possibility of long-term subjective pain and occupational difficulties for medical workers infected with COVID-19 could be included in the data model, which will provide a better basis for future prevention and treatment [42]. 
Effective prior preparation, such as effective risk communication and the provision of psychological first aid before psychological stress and behavioral symptoms develop, can improve the effectiveness of post-disaster intervention [12]. In addition, longitudinal follow-up of survivors allows timely assessment of their mental consequences and of the relationship between patients' and their families' mental health; it is also an important way to prevent and correct early the psychological and social stress experienced by patients during the follow-up period, and it can provide a strong basis and reference for future emergent events. Conclusion The mental and psychological impact of major public health events may be long-term and may even far outweigh the initial threat to life. The psychological symptoms of COVID-19 patients and cured survivors include poor sleep quality, low mood, anxiety and inattention, as well as two of the most common and severe long-term mental illnesses, depression and PTSD, which will place a great burden on society in the long run. Age, gender, incidence rate and acute-phase risk factors are highly correlated with psychological morbidity, thereby providing a reliable basis for early identification and psychological support. At the same time, medical staff, as a special group fighting the epidemic and susceptible to infection, face multiple and complex pressures and are more prone to mental and psychological diseases; this is especially true of infected medical staff. The inflammatory response and the prognostic sequelae in other organ systems are possible mechanisms for the generation and continuous progression of depression and PTSD. Therefore, reducing the dosage of corticosteroids during treatment, actively treating psychiatric or psychological complications in the rehabilitation period with psychological counseling, and treating severe depression with chronic antidepressant medication should all have a significant effect. In addition, the government should provide public policy support to survivors, de-stigmatize the disease, and develop follow-up research programs to strengthen the evaluation of COVID-19 for long-term mental illness.
2020-09-22T13:59:56.013Z
2020-09-22T00:00:00.000
{ "year": 2020, "sha1": "7af30d7f3b5d933f05a15267b8847ca5e3f50684", "oa_license": "CCBY", "oa_url": "https://bpsmedicine.biomedcentral.com/track/pdf/10.1186/s13030-020-00196-6", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7af30d7f3b5d933f05a15267b8847ca5e3f50684", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
220045807
pes2o/s2orc
v3-fos-license
Increased expression of connexin 43 in a mouse model of spinal motoneuronal loss Amyotrophic lateral sclerosis (ALS) is one of the most common motoneuronal disease, characterized by motoneuronal loss and progressive paralysis. Despite research efforts, ALS remains a fatal disease, with a survival of 2-5 years after disease onset. Numerous gene mutations have been correlated with both sporadic (sALS) and familiar forms of the disease, but the pathophysiological mechanisms of ALS onset and progression are still largely uncertain. However, a common profile is emerging in ALS pathological features, including misfolded protein accumulation and a cross-talk between neuroinflammatory and degenerative processes. In particular, astrocytes and microglial cells have been proposed as detrimental influencers of perineuronal microenvironment, and this role may be exerted via gap junctions (GJs)- and hemichannels (HCs)-mediated communications. Herein we investigated the role of the main astroglial GJs-forming connexin, Cx43, in human ALS and the effects of focal spinal cord motoneuronal depletion onto the resident glial cells and Cx43 levels. Our data support the hypothesis that motoneuronal depletion may affect glial activity, which in turn results in reactive Cx43 expression, further promoting neuronal suffering and degeneration. INTRODUCTION Amyotrophic lateral sclerosis (ALS) is a progressive neurodegenerative disease that affects upper and lower motoneurons [1,2]. Although the main ALS hallmark is motoneuronal loss due to motoneuron vulnerability, resident glial cells play a crucial role in ALS pathogenesis. In particular, during the disease progression, a robust neuroinflammation, glial activation and misfolded protein accumulation can be observed, together driving progressive neuronal loss and persistent disabilities [3,4]. Recent evidence on neurodegenerative/inflammatory disorders have highlighted a key role of neuroglial crosstalk, which substantially contributes to neuronal suffering and degeneration [3,4]. Gap junctions (GJs) are characterized by the juxtaposition of two hemichannels (HCs) of adjacent cells, and allow the exchange of ions, metabolites, and other mediators < 1 kDa between intracellular fluids (i.e. GJs-mediated intercellular communication) or between intracellular and extracellular compartment (i.e. HCs-mediated communication) [5,6]. GJs are aggregates in defined plasma membrane regions of adjacent cells forming the socalled GJs plaques, in which GJs are rapidly assembled, disassembled or remodelled [6]. Previous evidence demonstrated that connexins (Cxs), the core GJs-and HCs-forming proteins, exert a prominent role in maintaining physiological functions and promoting reactive activation of glial cells [7]. Indeed, previous reports on transgenic mouse models of ALS, showed an early Cx43-reactive expression on spinal cord microenvironment. This evidence was also observed in aging and in major neurodegenerative disorders, including spinal cord injury and Alzheimer's disease [8][9][10]. It seems likely that ALS has a focal onset in the central nervous system, where microenvironmental conditions are particularly hostile and mediate neurodegeneration spread and progression [2,11,12]. Thus, we developed a mouse model of focal removal of lumbar spinal cord motoneurons using retrograde suicide transport of saporin, conjugated to cholera toxin-B subunit (CTB-Sap) [13,14]. 
Herein we investigated Cx43, the most abundant GJsand HCs-forming protein of the central nervous system, and its possible role in human ALS, as well as in the CTB-Sap model [13,14]. We have shown that Cx43reactive expression may represent the biological substrate underlying reactive glial activation and neuronal suffering in neurodegenerative diseases. Correlation between GJA1 and GFAP in human ALS We first tested the hypothesis of a potential role of Cx43 in human ALS analysing the z-score of mRNA expression levels in the central nervous system of control and sporadic (s)ALS patients. We used the NCBI Gene Expression Omnibus (GEO) database (http://www.ncbi.nlm.nih.gov/geo/) to select human healthy and ALS gene expression dataset. We analysed the GFAP (encoding for the glial fibrillary acidic protein) and GJA1 (encoding for Cx43) expression levels in central nervous system biopsies of healthy and sALS patients. Our analysis revealed that in sALS patients both GFAP and GJA1 mRNA levels were significantly increased as compared to the healthy counterpart ( Figure 1A, 1B). We then moved to analyse a potential correlation between GFAP and GJA1 performing a linear regression analysis, finding a positive correlation between tested genes in human sALS central nervous system (r 2 = 0.4765, p-value < 0.0001, Figure 1C). CTB-Sap-induced motoneuronal depletion mediates behavioural impairment in mice In order to analyse the effects of motoneuronal loss and its impact on behavioural and neuropathological signs in vivo, we established a model of spinal motoneuronal depletion induced by the neuronal targeting toxin CTB-Sap, which is retrogradely transported throughout axons to the spinal cord. We evaluated the behavioural impact of motoneuronal loss at 0, 7, 21 and 42 days post-lesion (dpl), performing an open field grid walk test ( Figure 2A), tracking the distance covered by mice during the task with a tracking camera, and the number of footfalls over meter with a counting camera (Figure 2A). We found that both healthy control and CTB-Sap lesioned mice were active in the performance and covered an average distance of 3.2 ± 0.5 and 4.2 ± 1.0 meters, respectively (p-value > 0.444, Figure 2B). We also found that CTB-Sap lesioned mice showed a significant increase of the rate of errors as soon as 7 dpl and that such motor coordination impairment was retained up to 42 dpl ( Figure 2C). We confirmed this evidence evaluating the clinical impairment during the time course of disease. Our data indicate that lesioned mice presented a stable impairment and a clinical score of about 2 ( Figure 2D), showing leg extension towards the lateral midline and also affected stepping during locomotion test. CTB-Sap induces typical electromyographic signs of denervation In order to better characterize the denervation in CTB-Sap-injected mice, we performed an electromyographic recording into the left gastrocnemius muscle to find signs of denervation and spontaneous electrical activity. The results of our analysis are reported in Figure 3A, 3B and show that CTB-Sap induces muscle denervation, as suggested by a relevant number of positive sharp waves, fibrillations, fasciculations and neuromyotonia ( Figure 3B). Of note, our electromyographic analysis found no obvious signs of myopathy. Spinal cord neuropathological analysis We then moved to analyse the neuropathological effects of CTB-Sap, by quantifying the impact onto the resident motoneuronal populations. 
Our analysis revealed a striking reduction of left over right motoneuron number in Rexed lamina IX of CTB-Sap lesioned mice versus healthy control ( Figure 3C, 3D). This depletion is also evident in Figure 3D, which shows representative images of cresyl violet-positive motoneurons in left Rexed lamina IX of healthy control and CTB-Sap mice. Cx43-mediated coupling in Rexed lamina IX glial cells The relevance of astroglial Cx43 in human ALS prompted us to evaluate a potential involvement of this Cx in a reductionist model of spinal motoneuronal loss induced by CTB-Sap. We assessed Cx43 expression in our model, by measuring the Cx43 mean fluorescence AGING intensity (MFI) in the spinal cord of healthy control and CTB-Sap mice, finding a significant MFI increase in GFAP and Cx43 levels in Rexed lamina IX of motoneuronal depleted spinal cord ( Figure 4A, 4B). Such an increase was coupled with morphological changes in astroglial (i.e. GFAP positive) and microglial (i.e. IBA1 positive) cell populations ( Figure 4B). Finally, we analysed the profile plot of GFAP, IBA1 and Cx43 in the spinal cord of healthy control ( Figure 5A) and CTB-Sap-lesioned ( Figure 5B) mice, confirming an increased colocalization between Cx43 and GFAP/IBA1 ( Figure 5A, 5B). DISCUSSION It is known that glial cells, both astrocytes and microglia, hold key physiological roles in the central nervous system, such as immunological surveillance, blood brain barrier function, synaptic activity, neuronal trophism and metabolic support [1,[14][15][16][17][18]. In the last decades, advances have come to suggest a critical role of neuroglial cross-talk and related microenvironmental modulation during neurodegenerative disorders [7,19,20]. Such a role, besides being an attractive target due to its pathophysiological importance, also opens new scenarios to develop potential effective therapeutic strategies. Several in vitro and in vivo models of main neurological conditions such as stroke, multiple sclerosis, Alzheimer's disease and ALS, demonstrated that reactive astrocytes and microglia amplify neuroinflammation and neurodegeneration through aberrant GJs/HCs communication [21]. It is noteworthy that even in aging models, dysregulation of astroglial population and Cx43 dynamic expression profile may be one of the responsible mechanisms for Aβ deposits in the brain [9,22,23]. Notably, an abnormal increase in Cx43 expression has been described as one of the mechanisms for astrocytemediated toxicity in both SOD1(G93A) mice and in the central nervous system of ALS patients [20]. Herein, we first analysed available data on NCBI GEO database to select human ALS transcriptome dataset (E-MTAB-2325) in order to verify whether astrogliosis and reactive Cx43 expression, which are both reported in ALS neuropathology, were positively correlated. Such analysis suggested that astrocytes represent the leading cell population in showing Cx43 expression, and that human astroglial reactive Cx43 finds a correspondence in mice model of motoneuronal diseases. Astroglial cells are able to communicate with each other through Cxsbased GJs, mainly expressing Cx43 [7]. This direct astrocyte-to-astrocyte communication is involved in homeostatic processes within the complex intercellular network they form, allowing metabolites, small molecules and second messengers trafficking. During neurodegenerative disease, central nervous system microenvironment is substantially affected by inflammatory cytokines released by reactive microglia also acting on astroglial cells. 
Astrogliosis and concomitant reactive Cx43 expression contribute to homocellular and heterocellular communication, also releasing reactive oxygen species and inflammatory mediators. Therefore, such unbalanced communication fosters neurotoxic and proinflammatory loop of neurodegenerative disease [24,25]. We also assessed a toxin-based model of motoneuronal depletion established using CTB-Sap [14,26,27], which selectively targets axon terminals and kills motoneurons by retrograde suicide transport [28,29], thus inducing both muscular denervation and behavioural impairment of motor performance. Our reductionist in vivo model of motoneuronal disorders showed functional deficits and electromyographic signs typical of both transgenic ALS mouse model and human ALS patients [30][31][32]. In particular, our electromyography data revealed that CTB-Sap-induced motoneuronal ablation does not induce myopathy. Indeed, no obvious signs of myopathy were found in motoneuronal depleted mice. In myopathic diseases, in addition to apparent fibrillation potentials and positive sharp waves, normal or early recruitment is found, whereas in our animal model we found profuse fibrillation potentials and positive sharp waves associated with reduced recruitment, that is a typical pattern found in neuropathy and also observed in ALS patients [33,34]. In CTB-Sap induced motoneuronal depletion, we have therefore observed typical ALS electromyographic signs of denervation, thus supporting this model as a valuable tool to study neurodegeneration and central effects of reduced motoneuronal pool. A significant aspect of our model is the evidence of reactive astrocytes expressing Cx43, which suggested an increase in intercellular communication. Our evidence does not support a relationship between neuronal ablation efficiency and glial cells activation, although a potential relationship between spared motoneurons modulating the activation and function of both microglia and astrocytes, may occur. Moreover, enhanced Cx43 expression also activates a positive-loop conditioning ventral horn microenvironment that likely exerts a detrimental effect on spared motoneurons. Accordingly, negative effects induced by Cx43 overexpression have been reported in experimental models of ALS, showing that increased glial Cx43-channels significantly affect neuronal activity and wellness [20]. In particular, experimental evidence supports the hypothesis that Cx43 could exert such a detrimental role when assembled as HCs and exposed to cell membrane. Such an effect may be linked to increased excitotoxic calcium release, reactive oxygen species, glutamate and ATP, thus further inducing neuronal distress and death [1,25,[35][36][37]. The role of microglial cells during neurodegeneration is also of importance, in particular for their role as master regulators of inflammatory cytokine release. Microglia modulates astroglial functions releasing IL-1β and TNFα that have been linked to an overall increase of Cx43-based HCs activity, further sustaining neuronal suffering [38,39]. AGING In the present report, we found an altered glial activity in an experimental model of motoneuronal depletion, resulting in a reactive Cx43 expression. Further studies will help to characterize the molecular mediators and the role of selective silencing and/or pharmacological modulation of Cx43 function. 
GJs-or HCs-forming protein in CTB-Sap induced focal motoneuronal depletion may also offer the opportunity to evaluate a potential discrepancy of Cx43 biological meaning in the early versus the late stage of disease. Crucial information may be derived by Cx43 knockout models upon neurodegenerative insults, even if potential crossmodulation among Cxs may take place. Of note, the role of microglial GJs and HCs is still matter of debate, in particular on the heterocellular (i.e. microgliaastrocytes) GJs composition. A deeper investigation on the role of Cx43 in microglial cell population and on the crucial role of HCs in neuroglial crosstalk will help to elucidate biological substrates and to highlight potential therapeutic targets in neurodegenerative diseases. Human ALS data For human ALS data, we used the NCBI Gene Expression Omnibus (GEO) database (http://www.ncbi. nlm.nih.gov/geo/) to select human ALS central nervous system transcriptome dataset (E-MTAB-2325) analysing the GFAP (encoding for the glial fibrillary acidic protein) and GJA1 (encoding for Cx43) expression levels. Mesh terms "central nervous system", "ALS" and "Human" were used to identify potential datasets of interest. Healthy control tissues were matched for age, post-mortem (PM) delay and central nervous system region. The samples characteristics are available in Table 1. The analysis of microarray data by Z-score transformation was performed using MultiExperiment Viewer (MeV) software (The Institute for Genomic Research (TIGR), J. Craig Venter Institute, USA), in order to allow the comparison of microarray data independent of the original hybridization intensities and reduce the noise of original intensity signal [40][41][42]. Animal model All experiments were performed in accordance with the principle of the Basel Declaration as well as with the European Communities Council directive and Italian regulations (EEC Council 2010/63/EU and Italian D.Lgs. no. 26/2014). The protocol was approved by the Italian Ministry of Health (auth. no. 1133/2016-PR). All efforts were made to replace, reduce, and refine the use of laboratory animals. Experiments were performed on 8-12 weeks old male 129S2/SvPasCrl (Charles River Laboratories, Calco, Italy), as previously described [13,14]. Briefly, a total number of 16 animals were used in this study, randomly assigned to the HC group (n = 8) or the CTB-Sap (12 μg injected into the left gastrocnemius muscle) lesioned group (n = 8). For CTB-Sap injection, mice were anesthetized with isoflurane (4% for induction, 2% for maintenance). Mice were then observed for up to 42 days post lesion (dpl) evaluating the clinical score based on the following criteria: 0 = healthy; 1 = collapse or partial collapse of leg extension towards the lateral midline during the tail suspension test; 2 = toes curl under at least twice during walking of 30 cm or any part of the foot is dragging along the cage bottom/table; 3 = rigid paralysis or minimal joint movement, foot not being used for generating forward motion; 4 = mouse cannot straighten itself within 30 s after being placed on either side. Electromyography Electromyographic recording was performed as previously described [14]. Briefly, at 42 dpl mice were anesthetized with isoflurane and CTB-Sap injected gastrocnemius muscle was exposed and examined by a portable two-channel EMG device (Myoquick, Micromed S.p.A., Mogliano Veneto, Treviso, Italy) using 1 bipolar concentric needle electrode inserted in the gastrocnemius and 1 grounded electrode. 
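As an illustrative aside before the remaining methods subsections: the per-gene Z-score transformation and the GFAP/GJA1 regression described in the Human ALS data subsection above can be sketched with standard scientific-Python tools. This is not the authors' pipeline, which used the MeV software; the file names, column layout and group labels below are hypothetical.

import pandas as pd
from scipy import stats

# Hypothetical genes-by-samples expression matrix exported from the E-MTAB-2325 dataset.
expr = pd.read_csv("expression_matrix.csv", index_col=0)

# Per-gene Z-score transformation: centre and scale each gene across samples, one way to
# realise the stated goal of comparing values independently of hybridisation intensities.
z = expr.sub(expr.mean(axis=1), axis=0).div(expr.std(axis=1), axis=0)

gfap = z.loc["GFAP"]   # glial fibrillary acidic protein
gja1 = z.loc["GJA1"]   # connexin 43

# Linear regression between the two genes across the sALS samples; the sample-to-group
# assignment is assumed to be available in a separate table with matching sample IDs.
groups = pd.read_csv("sample_groups.csv", index_col=0)["group"]
sals = groups == "sALS"
res = stats.linregress(gfap[sals], gja1[sals])
print(f"slope = {res.slope:.3f}, r^2 = {res.rvalue ** 2:.4f}, p = {res.pvalue:.2g}")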
Open field grid walk test Open field grid walk test was performed at 0, 7, 21, and 42 dpl using a platform equipped with a tracking camera and a counting camera. Animals were placed in the arena and were free to move and to explore during the behavioural test. Each performance was recorded for 2 minutes and matched tracking and counting video were analysed off-line using Ctrax tracker software version 0.5.18 for Mac. Ex vivo tissue processing At 42 dpl, spinal cord isolation, cryo-sectioning and immunofluorescence analysis were performed as previously described [43]. Briefly, isolated spinal cords were post-fixed with 4% paraformaldehyde overnight at 4 °C. Samples were then cryo-protected with 30% sucrose in PBS overnight at 4 °C and then embedded in Optimum Cutting Temperature medium. Embedded samples were snap frozen in liquid nitrogen and cut into 20 μm-thick cryosections. Sections were collected on SuperFrost slides and stored at -80 °C until use. Before performing experiments, sections were dried at room temperature for 45 minutes and then washed in deptH 2 O and PBS 2 times for 5 minutes at room temperature. Statistical analysis All tests were performed in GraphPad Prism (version 5.00, GraphPad Software) or RStudio (version 1.0.153, RStudio Inc.). Data were tested for normality using a D'Agostino and Pearson omnibus normality test and subsequently assessed for homogeneity of variance. Data that passed both tests were further analyzed by two-tailed unpaired Student's t-test for comparison of n = 2 groups. For comparison of n ≥ 3 groups, one-way or two-way ANOVA was used where appropriate, and associations between variables were analysed by linear regression and correlation. CONFLICTS OF INTEREST Authors declare no conflicts of interest.
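To illustrate the group-comparison workflow set out in the Statistical analysis paragraph above (normality testing followed by a two-tailed unpaired comparison), a minimal Python sketch is given below; scipy's normaltest implements the D'Agostino-Pearson omnibus test. The data layout and variable names are assumptions made for the example, the homogeneity-of-variance check is omitted, and the non-parametric fallback is a generic choice rather than a test named in the paper.

import pandas as pd
from scipy import stats

# Hypothetical per-animal measurements, e.g. Cx43 mean fluorescence intensity in Rexed
# lamina IX, with one row per mouse and a "group" column set to "HC" or "CTB-Sap".
data = pd.read_csv("mfi_lamina_IX.csv")
hc = data.loc[data["group"] == "HC", "mfi"]
lesioned = data.loc[data["group"] == "CTB-Sap", "mfi"]

# D'Agostino-Pearson omnibus normality test on each group; groups of n = 8 are at the
# lower limit of what scipy accepts, so the result should be read with caution.
normal = all(stats.normaltest(g).pvalue > 0.05 for g in (hc, lesioned))

# Two-tailed unpaired comparison: Student's t-test when both groups look normal,
# otherwise a generic non-parametric alternative.
if normal:
    stat, p = stats.ttest_ind(hc, lesioned)
else:
    stat, p = stats.mannwhitneyu(hc, lesioned)
print(f"statistic = {stat:.3f}, p = {p:.4g}")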
2020-06-25T09:09:08.136Z
2020-06-24T00:00:00.000
{ "year": 2020, "sha1": "059ae23cd286aa4ab08b3488a6450e5c51754806", "oa_license": "CCBY", "oa_url": "https://doi.org/10.18632/aging.103561", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8ac6e123cf7537129d802dfc9577dc97450cd1c4", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
237768278
pes2o/s2orc
v3-fos-license
A rare case of Kallmann syndrome with bimanual synkinesis Kallmann syndrome is a rare inherited disorder characterized by hypogonadotropic hypogonadism and anosmia or hyposmia. Such cases are mostly diagnosed in adolescent period with complaints of failure to achieve puberty. Early diagnosis and treatment can restore secondary sexual characteristics in such patients. We report a case of a 17-year-old male with Kallmann syndrome who came with hypogonadism and bimanual synkinesis. INTRODUCTION Kallmann syndrome is a rare genetic disorder characterized by failure of an individual to enter puberty. This condition can be associated with a number of phenotypical abnormalities. Precise epidemiological data is lacking due to difficulty in the diagnosis of the condition and gross variation in the phenotypic presentation of the syndrome. There are very few case reports describing adolescents with Kallmann syndrome with delayed puberty and bimanual synkinesis. CASE REPORT A 17-year-old boy presented with short stature, absence of secondary sexual characters and inadequate weight gain. Birth history was normal and there was no developmental delay in milestones. Both parents and one brother achieved puberty at normal ages and have normal height. On examination, patient was thin built with normal vitals. No pallor, icterus, clubbing, cyanosis and lymphadenopathy were noted. He had a nasal voice, lipomastia and did not have axillary and pubic hair. Testicular volume was 1ml in both testes. Stretched penile length was 4.3cm. He also had bimanual synkinesis. He had a weight of 42 kg (Z score -1.35), height of 156 cm (Z score -0.16), MPH of 162.5 cm (within target range), arm span/height ratio -1.01 (No eunachoid habitus), BMI of 16.4kg/m 2 (Z score -1.42) and bone age of 13 years 6 months (Greulich and pyle's atlas method). Systemic examination was normal. He also had hyposmia, which the parents and the boy were unaware of and which was verified by testing for different types of smells in each nostrils with Department of Paediatrics, Bai Jerbai Wadia children hospital, Mumbai eyes closed. He could identify odors from lemon, perfume and scented soap but couldnt identify odors from coffee, turmeric and alcohol sanitizer. On laboratory investigations, complete haemogram, serum electrolytes, liver and renal function tests were normal. Serum Testosterone (14.48 ng/dL), FSH (0.87 IU/dL) and LH (0.14 IU/dL) were prepubertal level. Cortisol (14.12 g/dL), Prolactin (16 ng/mL), DHEAS (342.3 g/dL) and Thyroid function test were normal. Vitamin D level (10.3ng/mL) was low. Echocardiography, ultrasonography abdomen, audiometry, ophthalmological examination was normal. Patient was found to have inappropriately low BMD for age on bone densitometry scan (L1 to L4 spine and total body Z score of-4 and -2.9 respectively). MRI brain with pituitary cuts revealed absent olfactory bulbs and hypoplastic olfactory sulci (Figure 1). Pituitary gland was normal. Figure 1: Magnetic resonance imaging of brain (T2) showing absence of bilateral olfactory bulb and hypoplastic olfactory sulcus on coronal section Based on the characteristic clinical and radiological findings, patient was suspected to have Kallmann syndrome and genetic test was done which revealed pathogenic variant, hemizygous, X linked recessive with microdeletions in the Xp22.31 region (KAL1 gene). 
He was started on Testosterone depot injection initially with a dose of 50mg/month intramuscular for 3 months, slowly increased and later to 250mg/month and Vitamin D supplementation. He has been counselled that he will develop secondary sexual characters after testosterone treatment but fertility won't be restored. He will require further LH and FSH based therapy to restore his fertility. On regular follow up, the patient was found to have developed pubic hair after 1 year of testosterone injections. DISCUSSION Kallmann syndrome is a rare genetic disorder characterized by Hypogonadotropic Hypogonadism with anosmia or hyposmia resulting from agenesis or hypoplasia of the olfactory lobes or sulci, or both. It is associated with Gonadotropin releasing hormone (GnRH) deficiency, characterized by complete or partial absence of any endogenous GnRH-induced LH pulsations [1]. GnRH neurons usually migrate along olfactory axons. In the absence of olfactory bulbs, this migration is disrupted leading to hypogonadotropic hypogonadism. This clinical condition was first reported by Maestre de San Jaun, a Spanish anatomist in 1856 [2]. Later, in 1944, an American geneticist, Kallmann reported a study of hypogonadism and anosmia occuring in three families. It affects 1 in 30000 males and is five times less common in females [3]. Modes of inheritance reported are autosomal dominant, autosomal recessive and X linked recessive. Five genes have been identified namely KAL1, FGFR 1, PROKR2, PROK2 and FGF8 [4]. Signs and symptoms can be split into two different clinical categories -Reproductive features involve failure to achieve puberty, small penis, small testes, primary amenorrhoea, poorly defined sexual characters and infertility. Non reproductive features involve anosmia, hyposmia, cleft palate, cleft lip, choanal atresia, icthyosis, seizure disorder and neurosensory hearing loss. Unilateral or rarely bilateral renal agenesis or aplasia, horseshoe kidneys and mirror movements of the hands (synkinesia) are limited to X-linked form [1] [5]. According to the presence of certain accompanying clinical features, genetic screening for particular genes may be prioritized: Synkinesis (KAL1), dental agenesis (FGF8/FGFR1), bony anomalies(FGF8/FGFR1), and hearing loss(CHD7) [6]. Patients with KAL1 gene have a significantly higher prevalence of Synkinesia (43%) compared with non KAL1 gene patients (12%) [7]. KAL1 is located at Xp22.3 and is the most common mutated gene causing Kallmann syndrome in 10% of patients [8]. This gene encodes for anosmin-1 which is an embryonic component of the extracellular matrix and is involved in GnRH induced olfactory neurons migration from the olfactory placode to the hypothalamus during embryonic life. Mutations in KAL1 usually induce severe reproductive phenotypes including absent puberty and high frequency of cryptorchidism or microphallus [9]. Serum testosterone, Luteinizing hormone and follicle stimulating hormone levels are usually low. MRI scan of brain shows a hypoplastic olfactory sulcus with absence of olfactory bulb in most of the cases. Testosterone is given in males to restore virilization and secondary sexual characters as part of replacement therapy. In females combined estrogen and progesterone are used. Pulsatile treatment with Gonadotropin Releasing Hormone is usually used to restore fertility. 
Reversal of symptoms (except anosmia) has been reported in 10% to 22% of cases [7]. This article highlights how early diagnosis and timely intervention are key in the management of Kallmann syndrome: they can restore secondary sexual characters and fertility and spare patients many health problems.
2021-09-28T01:10:11.499Z
2021-07-01T00:00:00.000
{ "year": 2021, "sha1": "fc942bb00e30936c2f77b0cf090bdda3e151ef87", "oa_license": "CCBY", "oa_url": "http://sljm.sljol.info/articles/10.4038/sljm.v30i1.279/galley/236/download/", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "8e4586b07e7cb5a7c88f5b221a2134c6b2d87887", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
7402728
pes2o/s2orc
v3-fos-license
Soluble ectodomain of c-erbB-2 oncoprotein in relation to tumour stage and grade in human renal cell carcinoma. The soluble ectodomain of c-erbB-2 oncoprotein was measured using a sandwich enzyme immunoassay in sera from 184 patients with renal cell carcinoma before initiation of treatment. The median serum level was 2062 U ml(-1) (range 865-4905 U ml(-1)). Levels were unaffected by sex, age and renal function. An inverse relation between disease stage (P = 0.0017) and tumour grade (P = 0.0009) and the serum level of c-erbB-2 ectodomain was observed. Survival time for patients with serum levels above median level was significantly longer than for patients with lower levels (P = 0.003). In a multivariate analysis, c-erbB-2 oncoprotein lost its prognostic information, while tumour stage and tumour grade were identified as independent prognostic factors. The c-erbB-2 proto-oncogene, also named HER-2/neu, is situated on chromosome 17 and encodes a transmembrane protein of 185 kDa (Schechter et al, 1985). This protein demonstrates structural similarities with the epidermal growth factor (EGF) receptor, with an extracellular glycosylated domain, a transmembrane domain and an intracellular domain with tyrosine kinase activity . Amplification and overexpression of c-erbB-2 has been reported in different types of malignant tumours (Yokota et al, 1986;Venter et al, 1987) and especially in breast and ovarian cancer, oncogene overexpression may predict prognosis (Slamon et al, 1987(Slamon et al, , 1989Tandon et al, 1989). In renal cell carcinoma, the expression of c-erbB-2 has been analysed, and Yokota et al (1986) demonstrated amplification in one of four tumours using Southern blot hybridization. Yao et al (1988), however, found no expression using Northern blot analysis. Using the same method, Freeman et al (1989), Weidner et al (1990) and Rotter et al (1992) all found lower expression of c-erbB-2 mRNA in tumour tissue than in non-neoplastic kidney tissue, while Stumm et al (1996) found frequent overexpression of erbB-2 mRNA using in situ hybridization. Herrera (1991) demonstrated overexpression of c-erbB-2 in paraffin-embedded tumours using immunocytochemistry, and Stumm et al (1996) found high levels in 22 of 34 fresh-frozen tumours. In human breast cancer cell lines, the extracellular domain of c-erbB-2 protein is shed from the surface (Mori et al, 1990;Zabrecky et al, 1991), and the soluble protein fragment can be quantified by means of immunological methods (McKenzie et al, 1989). Serum levels of this ectodomain have been analysed mostly in breast cancer patients (Mori et al, 1990;Camey et al, 1991;Leitzel et al, 1992), and Kandl et al (1994) have demonstrated its prognostic value. The aim of the present study was to evaluate the serum levels of the soluble ectodomain of c-erbB-2 oncoprotein in renal cell carcinoma in relation to clinicopathological parameters and to the clinical course of disease. MATERIALS AND METHODS Patients One hundred and eighty-four patients with histologically verified renal cell carcinoma were included in the study. The patients were admitted to the Department of Urology, University Hospital in Umea, from 1982 to 1994. There were 112 male and 72 female patients, and their median age was 66 years (range 25-85 years). The patients had a clinical examination including chest radiography, computerized tomography or ultrasonography of the abdomen. In case of symptoms, bone scintigraphy was performed. 
One hundred and seventy-three patients underwent radical nephrectomy, three underwent partial resection, and eight patients received palliative treatment with medroxyprogesterone, arterial occlusion or interferon because of advanced disease. The patients were staged according to Robson et al (1969), and tumour grade was assessed according to Skinner et al (1971) on a four-grade scale. Tumour size was measured on the surgical specimen or by computerized tomography. During the study, 93 patients died of renal cell carcinoma and 23 of intercurrent diseases. At the time of follow-up, 68 patients were alive, three with verified tumour relapse. The median follow-up time of these patients was 65 months (range 3-149 months). Sera from 23 patients with renal cysts were analysed and used as clinical controls. C-erbB-2 analysis Serum samples were taken after patients' informed consent and before initiation of therapy and stored at -80°C. C-erbB-2 was analysed in duplicate using a commercial enzyme-linked immunosorbent assay, neuAssay (QIA 10) from Oncogene Science, Uniondale, NY, USA. Statistics For statistical calculations the Mann-Whitney, the Jonckheere-Terpstra and Fisher's exact tests were used (Sprent, 1989). Survival analyses were performed according to the Kaplan-Meier method using the log-rank test. Multivariate analysis of prognostic factors was performed according to Cox's proportional hazards model. RESULTS The soluble ectodomain of c-erbB-2 oncoprotein was assessed in serum from 184 patients with renal cell carcinoma. The median value, 2062 U ml⁻¹ (range 865-4905 U ml⁻¹), was significantly lower than that of 23 patients with renal cysts (median 2524; P = 0.0014). After subdivision according to disease stages (Table 1), a significant inverse relation between ectodomain level and stage was observed (P = 0.0017, Jonckheere-Terpstra test). A similar inverse relation was observed between serum levels and tumour grade (P = 0.0009). The yearly variation from 1982-94 was analysed, and no trend towards an increase or decrease of the levels was found, indicating that the soluble ectodomain was stable during storage (data not shown). No difference between the levels in male and female patients was observed. Nor was there any significant difference when the patients were subdivided according to age or renal function assessed as serum creatinine, as shown in Table 2. Survival time was compared between patients with c-erbB-2 above and below the median value (2060 U ml⁻¹), as shown in the Figure. Prognosis was significantly better for patients with higher levels than for those with lower levels (P = 0.003, log-rank test). When survival was analysed in different disease stages separately, the same tendency was observed in stage I disease (P = 0.047). Patients with c-erbB-2 above the median had a significantly higher survival rate and longer survival time when compared with those with lower concentrations. For patients with stage II-III and stage IV disease no such difference could be observed. No difference in age or gender ratio was found when all patients with c-erbB-2 levels above the median were compared with those with c-erbB-2 levels below the median. There were, however, significant differences in disease stage, tumour diameter and outcome, as shown in Table 2. Multivariate analysis The prognostic value of age, gender, disease stage, tumour grade and soluble ectodomain of c-erbB-2 protein level was assessed in a multivariate analysis using the Cox method.
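As a hedged illustration of the workflow just described (Kaplan-Meier estimates split at the median marker level, a log-rank comparison, and a Cox proportional hazards model), the sketch below shows how such an analysis is typically coded; it is not the authors' code, and the column names followup_months, died_of_rcc, cerbb2, stage and grade are hypothetical placeholders for a data set laid out as in this study.

```python
# Hedged sketch of the survival analysis described above (not the authors' code).
# Assumes a pandas DataFrame with hypothetical columns:
#   followup_months : follow-up time in months
#   died_of_rcc     : 1 if the patient died of renal cell carcinoma, else 0
#   cerbb2          : serum level of the c-erbB-2 ectodomain
#   stage, grade    : tumour stage and grade coded numerically
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

def analyse(df: pd.DataFrame) -> None:
    high = df["cerbb2"] > df["cerbb2"].median()
    above, below = df[high], df[~high]

    # Kaplan-Meier curves for marker levels above and below the median
    for label, grp in (("above median", above), ("below median", below)):
        km = KaplanMeierFitter().fit(grp["followup_months"],
                                     event_observed=grp["died_of_rcc"], label=label)
        km.plot_survival_function()

    # Log-rank comparison of the two groups
    lr = logrank_test(above["followup_months"], below["followup_months"],
                      event_observed_A=above["died_of_rcc"],
                      event_observed_B=below["died_of_rcc"])
    print("log-rank P =", lr.p_value)

    # Cox proportional hazards model with stage, grade and marker level
    cph = CoxPHFitter()
    cph.fit(df[["followup_months", "died_of_rcc", "stage", "grade", "cerbb2"]],
            duration_col="followup_months", event_col="died_of_rcc")
    cph.print_summary()
```

The Mann-Whitney and Fisher's exact comparisons mentioned above are available as scipy.stats.mannwhitneyu and scipy.stats.fisher_exact and can be added to the same routine without changing its structure.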
As shown in Table 3, disease stage and tumour grade were independent predictors of prognosis. DISCUSSION In the present study the extracellular domain of the c-erbB-2 oncoprotein in sera from patients with renal cell carcinoma was analysed. The c-erbB-2 oncogene product is a receptor-like structure homologous to the EGF receptor. Press et al (1990) identified this oncoprotein immunohistochemically on the membranes of most normal epithelial cells, stronger in human fetal tissues and weaker in adult tissues. The oncogene product is hence expressed on the normal cell membrane and is probably involved in cell proliferation. The c-erbB-2 oncogene has been extensively evaluated in breast cancer, in which about 30% of the tumours show overexpression (Lupu et al, 1995). In renal cell carcinomas, on the other hand, the c-erbB-2 oncogene has only been analysed in a limited number of tumours. Yokota et al (1986) found gene amplification in one of four renal cell carcinomas using Southern blot analysis, while Freeman et al (1989), Weidner et al (1990) and Stumm et al (1996) were unable to detect any amplification of the c-erbB-2 oncogene. The transcript of the c-erbB-2 oncogene has been analysed using Northern blot analysis in renal cell carcinoma by Yao et al (1988), who found no expression in 16 tumours. Weidner et al (1990) and Rotter et al (1992) found lower mRNA expression in tumour than in normal renal tissue. Freeman et al (1989) also found lower mRNA expression in tumour than in normal renal tissue using dot blot analysis, while Stumm et al (1996) found high or moderate expression in 29 of 34 tumours using in situ hybridization. Weidner et al (1990) related the results of the Northern blot analysis to tumour grade and were unable to find any correlation. Rotter et al (1992), however, found a non-significant inverse relation between the c-erbB-2 oncoprotein level and tumour grade. Taken together, these results indicate that amplification of the c-erbB-2 oncogene is a rare event in renal cell carcinoma. mRNA expression assessed with different methods seems to be variable, possibly because of the limited number of tumours analysed. The results of the present study indicate lower serum levels of soluble ectodomain in more advanced stages and grades of renal cell carcinoma. Whether this is because of lower production, diminished shedding or possibly an increased metabolism of the oncoprotein fragment is uncertain. The c-erbB-2 oncoprotein expression has previously been studied in a limited number of renal cell carcinomas. Herrera (1991), in an analysis of cystic renal disease using immunohistochemistry on formalin-fixed paraffin-embedded material, found overexpression of the c-erbB-2 oncoprotein in two out of five renal cell carcinomas. No correlation with disease stage or tumour grade was presented. Stumm et al (1996) found high levels of c-erbB-2 oncoprotein expression in 64% of fresh-frozen tumours using immunohistochemistry, but the relation to stage and grade was uncertain. In the present study, an inverse relation between tumour grade, disease stage, survival time and the serum level of c-erbB-2 oncoprotein was observed. Our results are in line with previous studies in colonic and ovarian cancer. Cohen et al (1989), using cell lines from colonic cancers, found lower c-erbB-2 expression in poorly differentiated tumours than in more differentiated tumours.
McKenzie et al (1993) analysed soluble ectodomain of c-erbB-2 oncoprotein in ovarian cancer and found significantly lower levels in more advanced disease stages and a tendency towards lower levels in poorly differentiated tumours. In breast cancer, the c-erbB-2 oncogene expression was increased in more advanced disease stages and in poorly differentiated tumours (Slamon et al, 1987;Lupu et al, 1995), findings that are opposed to the results of the present study. Variable results have been presented in other studies of breast cancer in which expression was found to be at a higher frequency in ductal carcinoma in situ tumours than in invasive tumours (van de Vijver et al, 1988;Allred et al, 1992). Univariate analysis of the prognostic value of the soluble ectodomain of c-erbB-2 in the present study shows that the level was inversely related to survival time. This result is opposed to the findings in breast cancer, in which overexpression of c-erbB-2 oncoprotein is a negative prognostic factor (Slamon et al, 1987;Tandon et al, 1989;Kandl et al, 1994). When prognosis was evaluated in a multivariate analysis in renal cell carcinoma, the strong predictors were stage and grade in accordance with earlier reports (Thrasher and Paulson, 1993), while c-erbB-2 oncoprotein lost its independent prognostic value. In conclusion, an inverse relation between serum levels of the soluble ectodomain of c-erbB-2 oncoprotein and disease stage, tumour grade and survival time in renal cell carcinoma was found.
2014-10-01T00:00:00.000Z
1997-01-01T00:00:00.000
{ "year": 1997, "sha1": "acc7b3e54ad63878506f7962933fdd9d1036306d", "oa_license": "implied-oa", "oa_url": "https://europepmc.org/articles/pmc2223523?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "acc7b3e54ad63878506f7962933fdd9d1036306d", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
14215946
pes2o/s2orc
v3-fos-license
Multivariable backward-shift-invariant subspaces and observability operators It is well known that subspaces of the Hardy space over the unit disk which are invariant under the backward shift occur as the image of an observability operator associated with a discrete-time linear system with stable state-dynamics, as well as the functional-model space for a Hilbert space contraction operator. We discuss two multivariable extensions of this structure, where the classical Hardy space is replaced by (1) the Fock space of formal power series in a collection of $d$ noncommuting indeterminates with norm-square-summable vector coefficients, and (2) the reproducing kernel Hilbert space (often now called the Arveson space) over the unit ball in ${\mathbb C}^{d}$ with reproducing kernel $k(\lambda, \zeta) = 1/(1 -<\lambda, \zeta>)$ ($\lam, \zeta \in {\mathbb C}^{d}$ with $\| \lambda \|, \| \zeta \|<1$). In the first case, the associated linear system is of noncommutative Fornasini-Marchesini type with evolution along a free semigroup with $d$ generators, while in the second case the linear system is a standard (commutative) Fornasini-Marchesini-type system with evolution along the integer lattice ${\mathbb Z}^{d}$. An abelianization map (or symmetrization of the Fock space) links the first case with the second. The second case has special features depending on whether the operator-tuple defining the state dynamics is commutative or not. The paper focuses on multidimensional state-output linear systems and the associated observability operators. Introduction For U and Y any pair of Hilbert spaces, we use the notation L(U, Y) to denote the space of bounded, linear operators from U to Y. For X a single Hilbert space, we shorten the notation L(X , X ) to L(X ). Let X , U and Y be Hilbert spaces, let A ∈ L(X ), B ∈ L(U, X ), C ∈ L(X , Y) and D ∈ L(U, Y) be bounded linear operators, and let us consider the associated discrete-time linear time-invariant system x(n + 1) = Ax(n) + Bu(n) y(n) = Cx(n) + Du(n) (1.1) with x(n) taking values in the state space X , u(n) taking values in the input-space U and y(n) taking values in the output-space Y. If we let the system evolve on the nonnegative integers n ∈ Z + , then the whole trajectory {u(n), x(n), y(n)} n∈Z+ is determined from the input signal {u(n)} n∈Z+ and the initial state x(0) according to the formulas is the transfer function of the system Σ given by (1.1). In particular, if the input signal {u(n)} n∈Z+ is taken to be zero, the resulting output {y(n)} n∈Z+ is given by y = O C,A x(0). In case O C,A is bounded as an operator from X into ℓ 2 Y := ℓ 2 ⊗ Y (here ℓ 2 is the space of square-summable complex sequences indexed by the nonnegative integers Z + , we say that the pair (C, A) is output-stable. It is convenient to represent O C,A in the output-stable case in the matrix form where H 2 , the image of ℓ 2 under the Z-transform, is the space of analytic functions on the unit disk with modulus-square-summable sequence of Taylor coefficients: |f n | 2 < ∞}, the output stability of (C, A) is equivalent to the Z-transformed version of the observability operator (1.5) being bounded as an operator from X into H 2 Y , It is readily seen that O C,A x = O C,A x. If (C, A) is output-stable, then the observability gramian is bounded on X and can be represented via the series A * n C * CA n (1. 6) converging in the strong operator topology. 
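For reference, the classical (d = 1) objects just described can be written out as follows; this restatement follows the surrounding definitions and the standard conventions for such systems, and is not a verbatim quotation of the original displays.

```latex
% Classical (d = 1) objects referred to above, reconstructed from context.
\[
  y(n) = CA^{n}x(0) + \sum_{m=0}^{n-1} CA^{\,n-1-m}Bu(m) + Du(n), \qquad
  T_{\Sigma}(z) = D + zC(I - zA)^{-1}B,
\]
\[
  \mathcal{O}_{C,A}\colon x \mapsto \{CA^{n}x\}_{n\in\mathbb{Z}_{+}}, \qquad
  \widehat{\mathcal{O}}_{C,A}\colon x \mapsto C(I - zA)^{-1}x = \sum_{n\ge 0}(CA^{n}x)\,z^{n},
\]
\[
  \mathcal{G}_{C,A} = \widehat{\mathcal{O}}_{C,A}^{\,*}\,\widehat{\mathcal{O}}_{C,A}
  = \sum_{n\ge 0} A^{*n}C^{*}CA^{n} \qquad \text{(the strongly convergent series in (1.6)).}
\]
```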
The following result gives a summary of well-known connections between output stability, observability gramians and solutions of associated Stein equations and inequalities. has a positive semidefinite solution H ∈ L(X ). (2) If (C, A) is output-stable, then the observability gramian G C,A satisfies the Stein equality H − A * HA = C * C (1.8) and is the minimal positive semidefinite solution of the Stein inequality (1.7). (3) There is a unique positive semidefinite solution of the Stein equality (1.8) if A is strongly stable, i.e., powers A n of A tend to zero in the strong operator topology of L(X ). If A is a contraction operator, then the positive semidefinite solution of the Stein equation (1.8) is unique if and only if A is strongly stable. A pair (C, A) is called observable if the operator O C,A (equivalently, O C,A , G C,A ) is injective. This property means that a state space vector x ∈ X is uniquely recovered from the output string {y(n)} ∞ n=0 generated by running the system (1.1) with the zero input string and the initial condition x(0) = x. A pair (C, A) is called exactly observable if O C,A (equivalently, G C,A ) is bounded and bounded from below. Associated with an output-stable pair (C, A) is the range of the observability operator Ran O C,A = {C(I − zA) −1 x : x ∈ X }. The following theorem summarizes the connection between such ranges and backward-shift-invariant subspaces of H 2 Y . Theorem 1.2. Suppose that (C, A) is an output-stable pair. Then: (1) The linear manifold Ran O C,A is invariant under the backward shift operator (1.9) (2) Let H ≥ 0 be a solution of the Stein inequality (1.7) and let X ′ be the completion of X with inner product [x] 2 X ′ = Hx, x X (where [x] denotes the equivalence class modulo Ker H generated by x). Then A and C extend to define bounded operators A ′ : X ′ → X ′ and C ′ : X ′ → Y and the observability operator O C,A extends to define a contraction operator O C ′ ,A ′ from is an isometry if and only if H satisfies the Stein equation (1.8) and A ′ is strongly stable, i.e., HA n x, A n x → 0 for all x ∈ X . To define the Fock space, we let F d denote the free semigroup on the set {1, . . . , d} of the first d natural numbers and then let H 2 Y (F d ) consist of the space of all formal power series v∈F d f v z v in d noncommuting indeterminates z = (z 1 , . . . , z d ) with coefficients f v in a coefficient Hilbert space Y which are square-summable in norm: The shift operator S : f (λ) → λf (λ) acting on the Hardy space H 2 is replaced by the noncommuting d-tuple S = (S 1 , . . . , S d ) on H 2 Y (F d ) given by S j : f (z) → f (z)z j for j = 1, . . . , d. (1.12) The system (1.1) is replaced by a noncommutative multidimensional input-stateoutput system of the form (1.13) Here the system evolves along the free semigroup F d , and, for each v ∈ F d , the state vector x(v), input signal u(v) and output signal y(v) take values in the state space X , input space U and output space Y, and the system matrix U has the form (1.14) Such systems were introduced in [16] and with further elaboration in [11] and [12]; following [11] we call this type of system a noncommutative Fornasini-Marchesini linear system. The observability operator associated with an output map C : X → Y and a d-tuple A = (A 1 , . . . 
, A d ) of not necessarily commuting operators on a Hilbert space X , expressed in "frequency-domain" coordinates, takes the form For the particular case where A is a row contraction and C = (I − A * 1 A 1 − · · · − A * d A d ) 1/2 with Y taken to be equal to the closure of the range of C, this operator appears already in work of Popescu [48] under the term "Poisson kernel" and as the adjoint of the key operator L used in many constructions in the paper of Arveson [8]. Reproducing kernel Hilbert spaces consisting of formal power series were developed in a systematic way in [15]. Such spaces already appear (although not quite in our notation) in the Sz.-Nagy-Foiaş model theory for row contractions developed by Popescu (see [41,42,43]). We shall see that Theorems 1.1 and 1.2 extend in a natural way to this setting, where the observability gramian (1.6) in the statement of Theorem 1.1 is replaced with the multivariable observability gramian where the backward shift S * (1.9) in the statement of Theorem 1.2 is replaced by the d-tuple S * = (S * 1 , . . . , S * d ) of adjoints of the shift operators S j in (1.12), and where the positive kernel (1.10) becomes the kernel 16) in two sets z = (z 1 , . . . , z d ) and w = (w 1 , . . . , w d ) of noncommuting indeterminates (see Theorems 2.2 and 2.8 below). In the second Arveson-space setting, the Hardy space H 2 over the unit disk is replaced by the so-called Arveson (see [28,7]). In this case the underlying system evolves along the integer lattice Z d + = (n = (n 1 , . . . , n d ) : n j ∈ Z + } has the form of what we call a (commutative) Fornasini-Marchesini system    x(n) = A 1 x(n − e 1 ) + · · · + A d x(n − e d ) +B 1 u(n − e 1 ) + · · · + B d u(n − e d ) y(n) = Cx(n) + Du(n). (1. 18) Here and in what follows, e j denotes the element in Z d + having the j-th partial index equal to one and all other partial indices equal to zero: e j = (0, . . . , 0, 1, 0, . . . , 0) ∈ Z d + . (1.19) Thus the system matrix U has the same form (1.14) as for the noncommutative setting but the domain for all the signals and the system evolution is the integer lattice Z d + rather than the free semigroup F d and the associated "frequencydomain" objects are functions or formal power series in the commuting variables λ = (λ 1 , . . . , λ d ) rather than in the noncommuting indeterminates z = (z 1 , . . . , z d ). In Section 3, we show how the Arveson space H(k d ) ⊗ Y and this Fornasini-Marchesini linear system can be derived as an abelianization (sometimes also called symmetrization) of the noncommutative Fock space H 2 Y (F d ) and of the noncommutative Fornasini-Marchesini linear system, respectively; while it is well known that the Arveson space is a symmetrization of the Fock space and that the multiplier algebra on the Arveson space is the image under a completely positive map acting on the noncommutative multiplier algebra on the Fock space (see [7,6,25,26] and [49,50] for a recent, more general systematic framework), our extension of these ideas to the underlying system theory appears to be new. The observability operator, as in the noncommutative setting, is associated with a so-called output pair (C, A) but now has the form x where the variables λ 1 , . . . , λ d commute and the abelianized observability gramian G a C,A = ( O a C,A ) * O a C,A has an infinite-series representation more complicated than the second expression in (1.15) (see equation (3.10) below). 
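For orientation, the commutative-setting objects just described can be written out explicitly; this is reconstructed from the surrounding text (the observability operator in the form C(I - Z(λ)A)⁻¹x reappears in Section 3) rather than quoted from the original displays.

```latex
% Arveson-space kernel and abelianized observability operator, reconstructed from context.
\[
  k_{d}(\lambda,\zeta) = \frac{1}{1 - \langle \lambda, \zeta \rangle}, \qquad
  \lambda, \zeta \in \mathbb{B}^{d},
\]
\[
  \widehat{\mathcal{O}}^{a}_{C,A}\colon x \mapsto
  C\bigl(I - \lambda_{1}A_{1} - \cdots - \lambda_{d}A_{d}\bigr)^{-1}x
  \;=\; C\bigl(I - Z(\lambda)A\bigr)^{-1}x, \qquad
  \mathcal{G}^{a}_{C,A} = \bigl(\widehat{\mathcal{O}}^{a}_{C,A}\bigr)^{*}\widehat{\mathcal{O}}^{a}_{C,A},
\]
where, as in Section 3, $Z(\lambda)A$ abbreviates $\lambda_{1}A_{1} + \cdots + \lambda_{d}A_{d}$.
```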
In case the operator Proposition 3.3 below), and Theorem 1.2 has a natural analogue for this setting, with the abelianized multivariable observability gramian G a C,A = G C,A playing the role of the observability gramian in Theorem 1.1, with the operator d-tuple M * λ = (M * λ1 , . . . , M * λ d ) in place of the backward shift S * (1.9) in Theorem 1.2, and with kernel (1.10) now taken to be the multivariable positive kernel Theorems 3.14, 3.15 and 3.16 below). In the general case where the operator dtuple A = (A 1 , . . . , A d ) is not assumed to be commutative, there is no characterization of the abelianized observability gramian as a minimal solution of a generalized Stein equation analogous to the classical case given in Theorem 1.1, but there still is a somewhat more implicit analogue of Theorem 1.2, where the backward shift S * (1.9) in Theorem 1.2 is replaced by a solution of the so-called Gleason problem (see Theorems 3.20 and 3.21 below). The Gleason problem originates in the work of Henkin and Gleason (see [31,36]) and has been studied in the context of the Arveson space (with various formulas for the solution) in [3] with an application to realization questions in [2]. Our analogue of Theorem 1.2 for the Arveson space for the case of commutative d-tuple A has already been given in [19] (with a more general power-series setting worked out in [20]) for the finite-dimensional case. We also give various numerical examples (constructed with the aid of the software program MATHEMATICA) to illustrate how O C,A and O a C,A can have divergent properties when A is not commutative (see Examples 3.4, 3.9 and 3.11 below). Backward-shift-invariant subspaces for the classical setting have been used for some time in the operator-theory literature as the model space for a more general (abstractly defined) Hilbert-space contraction operator (see [21,40]); connections of this work with linear system theory were only realized later (see e.g. [34,35]). Our results develop the structure of such model spaces for the case of operatortuples and therefore are of interest from the point of view of multivariable operator theory. We find it satisfying that these model spaces in turn tie in with the theory of multidimensional linear systems in much the same way (but with some surprises) as in the classical case. As applications of the ideas, we obtain new system-theoretic derivations of the Beurling-Lax representation theorem for shift invariant subspaces in both the noncommutative and commutative settings; the result for the noncommutative setting is due originally to Popescu [44] and for the commutative setting to Arveson [7] and McCullough-Trent [38]. We also indicate connections with dilation theory and the von Neumann inequality for these settings (see [45,48,28,7]). Closely related to the kernels K C,A (z, w) and K a C,A (λ, ζ) (given by (1.16) and (1.20) with H normalized to be the identity operator) are kernels of de Branges-Rovnyak type (see [21] for the classical case) . . , z d ) and w = (w 1 , . . . , w d ) are two sets of noncommuting indeterminates with k Sz (z, w) = v∈F d z v w v ⊤ equal to the noncommutative Szegö kernel while λ = (λ 1 , . . . , λ d ) and ζ = (ζ 1 , . . . , ζ d ) are two sets of commuting variables) for respective reproducing kernel Hilbert spaces H(K S ), H(K a S ) in the respective noncommutative and commutative settings. 
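In the commutative case the de Branges-Rovnyak-type kernel just mentioned has the familiar form below; the noncommutative analogue is obtained by using the noncommutative Szegö kernel k_Sz in place of k_d. This is supplied as standard background, not as a quotation of the original formulas.

```latex
% de Branges-Rovnyak-type kernel in the commutative (Arveson-space) setting;
% the noncommutative version replaces k_d by the noncommutative Szego kernel k_Sz.
\[
  K^{a}_{S}(\lambda,\zeta)
  = \bigl(I_{\mathcal Y} - S(\lambda)S(\zeta)^{*}\bigr)\, k_{d}(\lambda,\zeta)
  = \frac{I_{\mathcal Y} - S(\lambda)S(\zeta)^{*}}{1 - \langle \lambda,\zeta\rangle},
  \qquad S \ \text{a contractive multiplier.}
\]
```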
In this situation (where K S and K a S are positive kernels in noncommuting and commuting variable, respectively), the respective power series are contractive multipliers, i.e., the respective multiplication operators respectively with norm at most 1. A particular issue is the construction of operators B j : U → X for j = 1, . . . , d and D : U → Y for some input space U so that . With the resolution of this issue, then the results here lead directly to representations of backward-shift-invariant subspaces as reproducing kernel Hilbert spaces of the form H(K S ) and H(K a S ) for a Schur multiplier S in both the noncommutative and commutative settings as well as linear-fractional realizations for Beurling-Lax representers of shift-invariant subspaces for both the noncommutative (see [44]) and commutative (see [38]) settings. We work out these issues for the commutative setting and for the noncommutative setting in [9] and [10] respectively. The paper is organized as follows. After the present Introduction, Section 2 focuses on the noncommutative Fock space setting while Section 3 focuses on the Arveson-space setting. Section 2 is divided into Section 2.1 dealing with the connections between solutions of generalized Stein equations and strong stability of the state dynamics for noncommutative Fornasini-Marchesini systems and Section 2.2 dealing with characterizing ranges of observability operators as backward-shiftinvariant subspaces of the Fock space with a certain reproducing-kernel-Hilbertspace structure. The first subsection (Section 3.1) of Section 3 deals with the less tractable issues parallel to the material in Section 2.1 of generalized Stein equations and stability for commutative Fornasini-Marchesini systems and also presents the abelianization map giving the connection between noncommutative and commutative Fornasini-Marchesini systems. The second subsection (Section 3.2) of Section 3, parallel to Section 2.2, discusses characterizations of observability-operator ranges for the case of a commutative Fornasini-Marchesini state-output system. The results are the most satisfying in case the operator-tuple A giving the state dynamics is commutative-these are collected in Subsection 3.2.1. The more implicit results for the case of noncommutative A are given in Subsection 3.2.2. 2. The Fock-space setting 2.1. Output stability and Stein equations: the noncommutative case. For d a positive integer, let F d be the free semigroup F d generated by the set of d letters {1, . . . , d}. Elements of F d are words of the form i N · · · i 1 where i ℓ ∈ {1, . . . , d} for each ℓ = 1, . . . , N with multiplication given by concatenation. We also use ∅ to denote the empty word; this serves as the unit element for F d . For v = i N i N −1 · · · i 1 ∈ F d , we let |v| denote the number N of letters in v and we let v ⊤ := i 1 · · · i N −1 i N denote the transpose of v. We let z = (z 1 , . . . , z d ) to be a collection of d formal noncommuting variables and let Y z denote the set of formal noncommutative If we let χ v be the characteristic function of the word v, so can be identified as the tensor product ℓ 2 (F d ) ⊗ Y and is mapped unitarily onto the space with the monomials z v playing the role of the basis vectors χ v . The noncommutative multidimensional analogue of the system (1.1) is the system with evolution along the free semigroup F d given by (1.13). 
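Before running the system (1.13), the word indexing just set up can be fixed concretely in code. The sketch below assumes the usual reading A^v = A_{i_N} ··· A_{i_1} for a word v = i_N ··· i_1 and uses arbitrary test matrices; it is illustrative only and is not taken from the paper.

```python
# Illustrative sketch of the free-semigroup calculus used in this section.
# A word v = i_N ... i_1 in F_d is stored as a tuple of letters, and the
# operator monomial is taken as A^v = A_{i_N} @ ... @ A_{i_1} (assumed convention).
# The truncated observability gramian is  G_N = sum_{|v| <= N} (C A^v)* (C A^v).
import itertools
import numpy as np

def monomial(A, word):
    """A^v for v = (i_N, ..., i_1), multiplying the factors left to right."""
    X = np.eye(A[0].shape[0])
    for letter in word:
        X = X @ A[letter]
    return X

def truncated_gramian(C, A, N):
    n = A[0].shape[0]
    G = np.zeros((n, n))
    for length in range(N + 1):
        for word in itertools.product(range(len(A)), repeat=length):
            M = C @ monomial(A, word)
            G += M.T @ M
    return G

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, n = 2, 3
    raw = [rng.standard_normal((n, n)) for _ in range(d)]
    A = [0.4 * M / np.linalg.norm(M, 2) for M in raw]   # small norms, so (C, A) is output-stable
    C = rng.standard_normal((2, n))
    G = truncated_gramian(C, A, N=16)
    # For an output-stable pair the gramian satisfies the generalized Stein
    # equation  G - A_1* G A_1 - ... - A_d* G A_d = C* C  (cf. Theorem 2.2 below).
    residual = G - sum(Aj.T @ G @ Aj for Aj in A) - C.T @ C
    print("Stein residual norm:", np.linalg.norm(residual))
```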
Upon running the system (1.13) with the zero input string u(v) = 0 for v ∈ F d and a fixed initial condition x(∅) = x ∈ X we get (2.5) Here we extend the noncommutative functional calculus (2.1) from noncommuting indeterminates z = (z 1 , . . . , z d ) to a d-tuple of operators A = (A 1 , . . . , A d ); we use the notation where the multiplication is now operator composition. Application of the formal noncommutative Z-transform (2.4) then gives where the formal power series T Σ (z) (by definition equal to the transfer function of the system (1.13)) is given by where we have set For details see [16] or, for a more general setting of structured noncommutative multidimensional systems, see [11]. In analogy to the classical case, the system (1.13) is called output-stable (and in this case we will say that the pair (C, A) is output-stable) if the output string {y(v)} v∈F d , defined as in (2.5) but with the input string {u(v)} v∈F d assumed to be equal to 0, belongs to ℓ 2 Y (F d ) for every x ∈ X and the observability operator is bounded as an operator from X into ℓ 2 and the following realization formula for O C,A is immediate: and is bounded. In this case it makes sense to introduce the observability gramian and its representation in terms of strongly converging series A v x 2 → 0 for all x ∈ X . (2.11) We mention that the term pure rather than strongly stable has been used in this context (see [8]), but we prefer the present terminology since pure so as to avoid confusion with the use of the term pure in the context of contractive operator-valued functions (see [40]). In analogy with the classical case one can introduce the unobservable subspace Thus, observability of (C, A) means that Ker G C,A is the zero subspace or that The following is the noncommutative Fock-space counterpart to Theorem 1.1. has a positive semidefinite solution H ∈ L(X ). In this case, (1) The observability gramian G C,A satisfies the generalized Stein equation and is the minimal positive semidefinite solution of the generalized Stein inequality (2.14). 15) is unique if A is strongly stable, i.e., (2.11) holds. Moreover, in case A is contractive in the sense that Proof. Suppose first that (C, A) is output-stable. Then for each x ∈ X , This has the consequence that the infinite series converges in the strong operator topology to an operator H ∈ L(X ) (in fact, H = G C,A is the observability gramian). From this infinite-series representation for G C,A it is easily verified that G C,A is positive semidefinite and satisfies the Stein equation (2.15) and hence also the Stein inequality (2.14). Conversely, suppose that the Stein inequality (2.15) has a positive semidefinite solution H. We first claim that for each N ∈ Z + . For N = 0, (2.17) collapses to (2.14) which is given. Inductively assume that Use the Stein inequality (2.14) to replace H on the right side by its lower bound which then simplifies to (2.17) as wanted. We rewrite (2.17) in the form and passing to the limit in (2.18) as N → ∞ gives G C,A ≤ H. In particular the operator G C,A is bounded (since H is) and therefore the pair (C, A) is outputstable. Proof of (1): As observed in the proof of the first part of the theorem, from the infinite-series representation (2.10) it follows that G C,A satisfies the Stein equation (2.15). If H is any solution of the Stein inequality, the computation leading to (2.18) shows that H satisfies (2.18). By taking the limit as N → ∞ we conclude that G C,A ≤ H as asserted. 
Proof of (2): Suppose that A is strongly stable and that H solves the Stein equation (2.15). Then the proof of (2.17) shows that in this case (2.17) holds with equality: for each N = 0, 1, 2, . . . . Taking the limit as N → ∞ and using the stability assumption (2.11) we conclude that H = G C,A . For the converse direction we assume in addition that A is contractive (i.e., (2.16) holds). We prove the contrapositive: if A does not satisfy the stability condition (2.11), then the solution of the Stein equation (2.15) is not unique. Assume therefore that A is not stable. By the assumption (2.16), the sequence of operators is decreasing and bounded below and therefore has a strong limit ∆. Since A is assumed not to be stable, this limit ∆ is not zero. However it is easily verified that Taking limits in (2.20) gives that ∆ = lim N →∞ ∆ N satisfies the homogeneous Stein equation We conclude that the solution of the Stein equation (2.15) cannot be unique. Particular cases of output pairs (C, A) are the cases where (C, A) is contractive (i.e., the Stein inequality (2.14) holds with H = I X ) and where (C, A) is isometric (i.e., the Stein equality (2.15) holds with H = I X ). For these cases some additional observations can be made along the lines of Theorem 2.2. Proposition 2.3. (1) Suppose that (C, A) is a contractive pair. Then (C, A) is output-stable with G C,A ≤ I X and the observability gramian G C,A is the unique positive semidefinite solution of the Stein equation (2.15) if and only if A is strongly stable. (2) Suppose that (C, A) is an isometric pair. Then (C, A) is output-stable. Moreover H = I is the unique solution of the Stein equation (2.15) if and only if A is strongly stable. In this case O C,A is isometric and hence also (C, A) is exactly observable. Proof. Statement (1) immediately follows from statements in Theorem 2.2 combined with the observation that (C, A) being a contractive pair implies that A is contractive. The first two assertions in statement (2) follow in a similar way. As for the last assertion, for the case where (C, A) is isometric, I X is a solution of the Stein equation (2.15); for the situation where A is strongly stable, uniqueness implies that the observability gramian G C,A = I X , i.e., that O C,A is isometric. Then also (C, A) is exactly observable by definition. Remark 2.4. The converse of the last part of Proposition 2.3 does not hold even for the case d = 1. More precisely, there exists an isometric pair of operators (C, A) such that (C, A) is observable but A is not strongly stable. An example necessarily requires that dim X = ∞. In the terminology of Sz.-Nagy-Foiaş [40]. it suffices to produce a completely non-isometric (c.n.i.) contraction operator A on a nontrivial Hilbert space X (so dim X > 0 and there is no Such an A is not strongly stable by the definition of the class C 1· , the definition of C makes the pair (C, A) isometric, and the condition that A is c.n.i. implies that (C, A) is observable. To construct such an operator A, let θ be a Schur-class outer function such that log(1 − |θ| 2 ) is not integrable (with respect to arc-length Lebesgue measure) over the unit circle T. Furthermore, let K(θ) be the associated Sz.-Nagy-Foiaş model space and let S(θ) be the Sz.-Nagy-Foiaş model operator where M λ and M ζ are the operators of multiplication by λ and by ζ, respectively. Now we let A := S(θ) * and note that A is in the class C 1· by Proposition 3.5 in [40] (since θ is outer) and A is c.n.i. 
by Theorem 5 in [13] (since the non-logintegrability property of 1−|θ| 2 implies that there is no H ∞ -function a(z) for which |a(ζ)| 2 ≤ 1 − |θ(ζ)| 2 for ζ ∈ T). This completes the construction. Let us say that the pair (C, A) is similar to the pair ( C, A) if there is an invertible operator S on X so that Then we have the following characterization of pairs (C, A) which are similar to a contractive or to an isometric pair. Proof. Suppose that H is a strictly positive-definite solution of (2.14). Factor H as H = S * S with S invertible and set Multiplying (2.14) on the left by S * −1 and on the right by S −1 then leads us to is a contractive pair which is similar to the original pair (C, A). Conversely, if ( C, A) given by (2.21) is contractive, then H = S * S is bounded and positive-definite and satisfies the Stein inequality (2.14). This verifies the first statement of the Proposition. The second statement follows in a similar way. As a consequence of the observations in Proposition 2.5, Proposition 2.3 can be formulated more generally as follows. (1) If the pair (C, A) is such that the Stein inequality (2.14) has a strictly positive-definite solution H, then (C, A) is output-stable. Moreover, the observability gramian G C,A is the unique positive semidefinite solution of the Stein equation (2.15) if and only if A is strongly stable. The last part of Proposition 2.6 has a converse. Proposition 2.7. Suppose that the pair (C, A) is output-stable and exactly observable. Then A is strongly stable, i.e., (2.11) holds. Proof. If (C, A) is output-stable and exactly observable, then the observability gramian G C,A is a strictly positive-definite solution of the Stein equation (2.15). Hence (2.19) holds with H = G C,A : (2.22) From the infinite-series representation (2.10) for G C,A , taking limits in (2.22) gives The strict positive-definiteness of G C,A tells us that there is an ε > 0 so that In particular, from (2.24) with A v x in place of x combined with (2.23) we get for all x ∈ X , and we conclude that A is strongly stable as asserted. 2.2. Observability-operator range spaces and reproducing kernel Hilbert spaces: the noncommutative-variable case. To develop the noncommutative analogue of Theorem 1.2, we first introduce the right noncommutative shift opera- It is readily seen that their adjoints (backward shifts) are given by with adjoints given by In addition to the unitary property τ * = τ −1 of τ , note also that τ intertwines the left shifts with the right shifts: Then we have the following Fock-space analogue of Theorem 1.2. Theorem 2.8. Suppose that (C, A) is an output-stable pair. Then: (1) The intertwining relation holds for every backward-shift operator (S R j ) * defined in (2.26) and hence Let H ≥ 0 be a solution of the Stein inequality (2.14) and let X ′ be the completion of X with H-inner product [x] 2 X ′ = Hx, x X . Then A j and C extend to define bounded operators A ′ j : X ′ → X ′ for j = 1, . . . , d and C ′ : X ′ → Y and the observability operator O C,A extends to define a con- is a solution of the Stein inequality (2.14) and the linear manifold M := Ran O C,A is given the lifted norm Hy, y X , Furthermore, M ′ is isometrically equal to the formal noncommutative reproducing kernel Hilbert space with reproducing kernel K C,A;H given by (1.16). holds for every f ∈ M, then M is contractively included in H 2 Y (F d ) and there exists a contractive pair (C, A) (so H = I positive definite solution of the Stein inequality (2.14)) such that isometrically. 
In case (2.35) holds with equality, then (C, A) can be taken to be an isometric pair. An explicit (C, A) meeting these conditions is given as follows. Take X to be the Hilbert space X = τ (M) (where τ is the involution given by (2.29)) with τ (f ) X = f M and define C : Proof of (1): Applying (S R j ) * to a typical element from Ran O C,A (with notation as in (2.7)), we get The latter equality shows that Ran O C,A is invariant under (S R j ) * for all j = 1, . . . , d (backward-shift-invariant) and (2.31) follows. Proof of (2): The Stein inequality (2.14) amounts to the statement that (C, A) is contractive and well-defined on the dense subset [X ] of X ′ (where [x] is the equivalence class containing x) and hence extends to a contractive pair (C ′ , A ′ ) on all of X ′ and moreover the inequality (2.17) holds for all N = 1, 2, . . . . From In particular it follows from (2.16) that and hence, by taking the strong limit on the right hand side, we get Proof of (3a): Statement (3a) follows from general principles laid out in [15]. Proof of (3b): We see that then With these substitutions, we see that (2.34) is equivalent to or, in operator-theoretic form, with equality in (2.34) equivalent to equality in (2.41). This completes the verification of part (3b) of Theorem 2.8. Before commencing the proof of part (4) of Theorem 2.8, we collect some useful (2.25) and (2.27) and let the operator E : is unitary, i.e., (where δ ij stands for the Kronecker symbol), and (2.46) (5) The observability operator O E,S R * is equal to the operator τ defined in (2.29) and hence is unitary. Proof and hence, in either the left or the right case, we have Proof of (4) in Theorem 2.8: Suppose that M is a Hilbert space contractively in- and hence H = I X satisfies the Stein inequality (2.14). From Proposition 2.9 we see that As explained by part (4) of Theorem 2.8, for purposes of study of contractivelyincluded, backward-shift-invariant subspaces of H 2 Y (F d ) which satisfy the differencequotient-inequality (2.34), without loss of generality we may suppose at the start that we are working with X ′ as the original state space X and with the solution H of the Stein inequality (2.14) to be normalized to H = I X . Then certain simplifications occur in parts (1)-(4) of Theorem 2.8 as explained in the next result. Theorem 2.10. Suppose that (C, A) is a contractive pair with state space X and output space Y. Then: (1) (C, A) is output-stable and the intertwining relation (2.31) holds. Hence and is isometrically equal to the formal noncommutative reproducing kernel Hilbert space H(K C,A ) with reproducing kernel K C,A (z, w) given by holds for all f ∈ H(K C,A ). Moreover, (2.49) holds with equality if and only the orthogonal projection Q of X onto (Ker O C,A ) ⊥ satisfies the Stein equation In particular, if (C, A) is observable, then (2.49) holds with equality if and only if (C, A) is an isometric pair. Proof. Statements (1)-(3) and all but the last part of statement (4) for all x ∈ X . Using the definition (2.48) of the H(K C,A )-norm, we rewrite this last equality as This holding for all x ∈ X is finally equivalent to the Stein equation (2.50). Remark 2.11. In Theorems 2.8 and 2.10 we could equally well have interchanged the roles of left versus right. For a given output pair (C, A), define the associated Then the linear manifold Ran O L C,A is invariant under the left backward shifts We leave the precise statements and proofs to the interested reader. 
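Theorem 2.10 is stated for contractive pairs, i.e. pairs normalized so that H = I_X solves the Stein inequality (2.14). A small numerical check of this normalization is sketched below, computing the gramian through the fixed-point form of the Stein equation rather than the word-by-word sum; the data are arbitrary test matrices and the code is an editorial illustration, not taken from the paper.

```python
# Numerical check that for a (strictly) contractive pair (C, A), i.e.
#   A_1* A_1 + ... + A_d* A_d + C* C <= I,
# the observability gramian satisfies G <= I and the Stein equation
#   G - A_1* G A_1 - ... - A_d* G A_d = C* C.
# G is computed by iterating the Stein map  G <- C*C + sum_j A_j* G A_j,
# whose increasing limit is the gramian of Theorem 2.2.
import numpy as np

def random_contractive_pair(d, n, p, rng):
    """Random (C, A) with the column block col(A_1, ..., A_d, C) a strict contraction."""
    col = rng.standard_normal((d * n + p, n))
    col /= 1.05 * np.linalg.norm(col, 2)
    A = [col[j * n:(j + 1) * n, :] for j in range(d)]
    C = col[d * n:, :]
    return C, A

def gramian(C, A, steps=300):
    G = np.zeros((C.shape[1], C.shape[1]))
    for _ in range(steps):
        G = C.T @ C + sum(Aj.T @ G @ Aj for Aj in A)
    return G

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    C, A = random_contractive_pair(d=2, n=4, p=2, rng=rng)
    G = gramian(C, A)
    print("G <= I ?", np.linalg.eigvalsh(np.eye(4) - G).min() >= -1e-9)
    stein = G - sum(Aj.T @ G @ Aj for Aj in A) - C.T @ C
    print("Stein residual norm:", np.linalg.norm(stein))
```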
The characterization (2.50) of the difference-quotient inequality holding with equality for a space H(K C,A ) in Theorem 2.10 can be made more explicit as follows. Proposition 2.12. Suppose that (C, A) is a contractive pair as in Theorem 2.10 and let Q be the orthogonal projection onto and we have the inequalities for j = 1, . . . , d, then Q satisfies the Stein equation (2.50) if and only if the pair (C 0 , A 0 j ) is an isometric pair, in which case we also have that A j2 = 0 (so Proof. First note that Ker O C,A is invariant for each A j and that Ker O C,A ⊂ Ker C. Therefore the matrix decompositions of C, A j , Q with respect to the decomposition X = Ker O C,A ⊕ (Ker O C,A ) ⊥ have the form as given in (2.53). Next note that the contractive property of the pair (C, A) means that On the other hand, the Stein inequality (2.51) works out to be As the left hand side of (2.55) is dominated by the left hand side of (2.54), it is clear that (2.55) follows from (2.54), and hence (2.51) holds as asserted. Since G C,A is the minimal positive semidefinite solution of the Stein inequality (2.14) (by part (2) of Theorem 2.2) and we now know that Q is one such solution, it follows that G C,A ≤ Q. As Q is an orthogonal projection on X , we also have Q ≤ I X and (2.52) now follows. From the (2, 2) entry of (2.54), we read off In particular i.e., Q satisfies (2.51). On the other hand, the validity of (2.50) reduces to or simply to Thus, the validity of (2.50) is equivalent to (C 0 , A 0 ) being an isometric pair, in which case we also have that A j2 = 0. Finally, we have the following uniqueness result. Theorem 2.13. Suppose that (C, A) and ( C, A) are two output-stable, observable pairs realizing the same positive kernel Then (C, A) and ( C, A) are unitarily equivalent, i.e., there is a unitary operator U : X → X such that C = CU and A j = U −1 A j U for j = 1, . . . , d. Proof. For any two words α, β ∈ F d , equating coefficients of z α w β ⊤ in (2.58) gives Hence the operator U defined by extends by linearity and continuity to define an isometry from The observability assumption implies that D U = X and R U = X ; hence U : X → X is unitary. From (2.59) it is easily seen that U C * = C * and U A * j = A * j U for j = 1, . . . , d. Since U is unitary we then get CU = C and A j U = U A j for j = 1, . . . , d and we conclude that (C, A) and ( C, A) are unitarily equivalent as desired. 2.3. Applications of observability operators: the noncommutative setting. As an application we give a proof of the Beurling-Lax theorem for the Fockspace setting originally given by Popescu [44]. We shall in fact prove a more general version of the Beurling-Lax-Halmos theorem for contractively-included (rather than isometrically included) subspaces of H 2 Y (F d ) due in the classical setting to de Branges (see [21]). Our proof is similar to that in [44] but highlights more explicitly the role of an associated observability operator. For this purpose we say that a formal power series θ(z) = v∈F d θ v z v ∈ L(U, Y) z is a contractive multiplier, also written as θ is in the d-variable, noncommutative Schur-class S nc,d (U, Y), if the operator M θ of multiplication by θ with operator norm at most 1. Such a formal power series θ(z) is said to be inner if moreover the operator is an isometry 1 . Theorem 2.14. ( if and only if there is a coefficient Hilbert space U and a contractive multiplier θ ∈ S nc,d (U, Y) so that if and only if the associated contractive multiplier θ is inner. Proof. 
We first verify sufficiency in statement (1). Suppose that M has the form M = θ · H 2 Y (F d ) for a contractive multiplier θ with M-norm given by (2.60). From the fact that M θ ≤ 1 it is easily verified that θ · f H 2 Y (F d ) ≤ θ · f M , i.e., (a) holds. From the intertwining property S R j M θ = M θ S R j (note that S R j is multiplication by z j on the right while M θ is multiplication by θ on the left), and property (c) follows. Finally, a short computation shows that . By hypothesis (c) we may then choose the operator C : M → U so that A) is an isometric pair and, by hypothesis (d), A * is strongly stable. Thus by part (2) of Proposition 2.3 it follows that the observability operator As observed for the general case in part (1) of Theorem 2.8, we have the intertwining condition Taking adjoints then gives is the inclusion map. From hypothesis (a) that ι ≤ 1, we see that Θ ≤ 1. From the intertwining relation (2.62) (together with hypothesis (b)) it follows that ΘS R j = S R j Θ and it follows (see e.g. [46]) that Θ is a multiplication operator, i.e., there is a contractive multiplier θ ∈ S nc,d (U, Y) so that Θ = M θ . From the fact that is an isometry, it follows that Ran( O C,A ) * = M and also that M = θ · H 2 U (F d ) with M-norm given by (2.60). This completes the proof of necessity in statement (1) of Theorem 2.14 for the general case. Since, as was observed above, O C,A is isometric, it follows that and hence Ran O C,A is invariant under S R j for j = 1, . . . , d. As Ran O C,A is also invariant under (S R j ) * for each j by (2.62), we conclude that Ran O C,A is reducing for S R . Since Ran C is dense in U by construction, we are now able to conclude that Ran O C,A is all of H 2 U (F d ) and hence O C,A : M → H 2 U (F d ) is actually unitary. It then follows finally that Θ = ι • ( O C,A ) * is isometric and hence θ is inner as asserted. This completes the proof of Theorem 2.14. A second application of these ideas is to operator model theory. For this application we are given only an operator-tuple T = (T 1 , . . . , T d ) ∈ L(H) d which is a row contraction, so We apply the ideas of the previous sections concerning the general pair (C, A) to a pair of the special form (D T * , T * ). For simplicity we assume in addition that T * is asymptotically stable, i.e., Then we have the following dilation result. Theorem 2.15. Suppose that T = (T 1 , . . . , T d ) is a row contraction with T * asymptotically stable as above and define the defect operator D T * and the coefficient space Y as in (2.63). Then there is a subspace M ⊂ H 2 Y (F d ) invariant for the backward shift operator-tuple S R * on H 2 Y (F d ) so that T is unitarily equivalent to P M S R | M . In particular, T has a row-shift dilation unitarily equivalent to S R on H 2 Y (F d ). Proof. By the same arguments as in the proof of Theorem 2.14, we see that then O D T * ,T * implements the unitary equivalence between T and P M S| M as wanted. Remark 2.16. In the classical case d = 1, the procedure for constructing the unitary dilation of a contraction operator via the observability operator as in the proof of Theorem 2.15 corresponds to the construction of Douglas (see [27]) (see also [40, Section I. 10.1]) which is an alternative to the more popular Schäffer-matrix construction of the unitary dilation (see [40,Section I.5]). Popescu (see [41,42]) used an analogue of the Schäffer-matrix construction to construct the row-unitary dilation of a row-contraction operator-tuple. 
From the existence of this dilation, he went on to verify a von Neumann inequality (see [45]): for any polynomial p ∈ C z in the noncommuting variable z = (z 1 , . . . , z d ). He returned to this topic in [48] to give another proof of the von Neumann inequality It is argued in [48] (as well as in [22] in the context of the classical case) that this is an elementary (i.e., dilation-free) proof of the von Neumann inequality. Indeed, as argued in [22], this proof of the von Neumann inequality goes back to the paper of Heinz [33]. However, we would argue that the dilation is very near the surface in this proof as well, since the Poisson kernel, i.e., the observability operator O D T * ,T * , provides the factorization of the Poisson transform (2.64) and is also the operator embedding the state space H into the dilation space H 2 Y (F d ) in the Douglas approach to dilation theory. A key combinatorial fact is that We then consider the symmetric Fock space ℓ 2 Y (SF d ) equal to the subspace of ℓ 2 Y (F d ) spanned by the elements χ n y (n ∈ Z d + and y ∈ Y) where χ n is given by Note that and hence, if B is an orthonormal basis for Y, then an orthonormal basis for ℓ 2 It is then natural to identify ℓ 2 Y (SF d ) with the weighted sequence space ℓ 2 w,Y (Z d + ) consisting of all Y-valued Z d + -indexed sequences {f n } n∈Z d + for which the norm given by is finite. We abbreviate ℓ 2 w,C (Z d + ) to ℓ 2 w (Z d + ) and observe that The commutative d-variable Z-transform with inner product given by Then it follows that the set { |n|! n! λ n : n ∈ Z d + } is an orthonormal basis for ℓ 2 w (Z d + ). By general principles concerning reproducing kernel Hilbert spaces we see that H(k d ) is a reproducing kernel Hilbert space of functions analytic on the unit ball with reproducing kernel k d (λ, ζ) given by (see e.g. [7]). This justifies the notation H(k d ) for the space. In analogy to (3.3) we will use notation H Y (k d ) := H(k d ) ⊗ Y for the tensor product Hilbert space that is characterized by If we define the map Π by then each basis vector χ v ∈ ℓ 2 (F d ) (v ∈ F d ) is mapped via Π to its abelianization χ n ∈ ℓ 2 w (Z d + ) and then Π is extended to the whole space ℓ 2 (F d ) via linearity. The norm on ℓ 2 w (Z d + ) is arranged so as to make Π a coisometry from ℓ 2 Y (F d ) onto ℓ 2 w,Y (Z d + ) with initial space equal to ℓ 2 Y (SF d ) and with kernel equal to the subspace If we introduce the Z-transformed version Π : then similarly Π is a coisometry from H 2 (F d ) onto H(k d ) with initial space equal to the subspace This gives the natural link between the Fock-space norm on formal power series and the Arveson-space norm on analytic functions on the unit ball and is the basis for the application of noncommutative results to prove commutative results in [6,25,47]. By a commutative d-dimensional linear system we mean a linear system with evolution along the integer lattice Z d + rather than along the free semigroup F d . A particular type of such a system is a system of the Fornasini-Marchesini form given by (1.18). If we specify an initial condition x(0) = x 0 ∈ X along with an input sequence {u 0 (n)} n∈Z d + and impose the boundary conditions that x(n) = 0 whenever n is outside the positive orthant Z d + , then the system equations uniquely determine a full system trajectory {u(n), x(n), y(n)} consistent with x(0) = x 0 and u(n) = u 0 (n) for n ∈ Z d + . 
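The recursion just described is straightforward to implement. The following sketch runs a d = 2 Fornasini-Marchesini system of the form (1.18) on a finite grid, with the boundary convention that the state and the input vanish outside the positive orthant; all matrices and the input are placeholder test data rather than anything from the paper.

```python
# Sketch of running the commutative Fornasini-Marchesini system (1.18) for d = 2:
#   x(n) = A1 x(n - e1) + A2 x(n - e2) + B1 u(n - e1) + B2 u(n - e2)
#   y(n) = C x(n) + D u(n)
# with x(0) = x0 prescribed and x, u set to zero outside the positive orthant.
import numpy as np

def run_fm_system(A1, A2, B1, B2, C, D, x0, u, N):
    """Return the outputs y(n) for all n = (n1, n2) with 0 <= n1, n2 <= N."""
    zero_x, zero_u = np.zeros_like(x0), np.zeros(B1.shape[1])
    x, y = {}, {}
    xv = lambda n: x.get(n, zero_x)      # zero boundary condition for the state
    uv = lambda n: u.get(n, zero_u)      # unspecified inputs are zero
    for n1 in range(N + 1):
        for n2 in range(N + 1):
            n = (n1, n2)
            if n == (0, 0):
                x[n] = x0                # prescribed initial condition
            else:
                x[n] = (A1 @ xv((n1 - 1, n2)) + A2 @ xv((n1, n2 - 1))
                        + B1 @ uv((n1 - 1, n2)) + B2 @ uv((n1, n2 - 1)))
            y[n] = C @ x[n] + D @ uv(n)
    return y

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A1, A2 = 0.3 * rng.standard_normal((2, 2)), 0.3 * rng.standard_normal((2, 2))
    B1, B2 = rng.standard_normal((2, 1)), rng.standard_normal((2, 1))
    C, D = rng.standard_normal((1, 2)), rng.standard_normal((1, 1))
    y = run_fm_system(A1, A2, B1, B2, C, D,
                      x0=np.ones(2), u={(0, 0): np.array([1.0])}, N=4)
    print(y[(4, 4)])
```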
If Π is the projection map introduced in (3.4) formally extended to be defined on all F d -indexed sequences to generate a Z d + -indexed sequence Here we use the convention that for v a word in F d and k ∈ {1, . . . , d} a letter and that Since the commutative Fornasini-Marchesini system (1.18) is just the abelianization of the noncommutative Fornasini-Marchesini system (1.13) and we have already derived the formula (2.6) for the solution of the noncommutative initial-value problem, we see that the solution of the initial-value problem for the commutative Fornasini-Marchesini system (1.18) is simply the abelianization of the corresponding formula for the noncommutative case: where the transfer function T Σ (λ) for the commutative Fornasini-Marchesini system is given by This gives a derivation of the transfer function relationship (3.5) (via the connection with noncommutative systems) which is an alternative to the usual direct approach via commutative multivariable Z-transform (see e.g. [14]). The zero input string simplifies the system to Given a pair (C, A), we have the option of considering (C, A) as coming from a noncommutative or a commutative system. If we consider the associated noncommutative system, the output string associated with initial state x(∅) = x (and zero input string) is the Y-valued function on F d given by A) is considered output stable if this output string is in ℓ 2 Y (F d ) for all x ∈ H. We say that the commutative system (3.6) is output stable (and in this case we will say that the pair (C, A) for all choices of initial state x ∈ X . We note that Π O C,A x can be computed explicitly as Thus another equivalent formulation of a-output stability is: A pair (C, A) is a-output stable means that the function C(I − Z(λ)A) −1 x belongs to H Y (k d ) for every x ∈ H, or equivalently (by the closed graph theorem), the operator O a C, is bounded. The inverse Z-transform sends the function and and a pair (C, A) is a-output stable if and only if O a C,A is bounded as an operator from X into ℓ 2 w,Y (Z d + ). In this case it makes sense to introduce the observability gramian and its representation in terms of strongly converging series follows immediately by definitions (3.9), (3.7) and the formulas for the inner products in ℓ 2 w,Y (Z d + ) and H Y (k d ). Definition 3.2. We say that the pair (C, A) is a-observable if G a C,A is positivedefinite and exactly a-observable if G a C,A is strictly positive definite. By Theorem 2.2 (2) we know that the observability gramian G C,A satisfies the Stein equation (2.15). It turns out that the abelianized observability gramian G a C,A satisfies a reverse Stein inequality (the reverse of (2.14)). Let (C, A) be an a-output-stable pair and let G a C,A be the abelianized observability gramian (3.10). Then G a C,A satisfies the reverse Stein inequality (3.11) Moreover, the following are equivalent: (1) Equality holds in (3.11). (2) A is C-abelian in the sense that The observability gramian and the abelianized observability gramian are identical: Proof. It suffices to show that the operator Q given by is positive semidefinite. To this end, plug (3.10) into (3.13) to get where Q N is given by We introduce the notation and extend the notation to the all of Z d by With these definitions we have the equality where e 1 , . . . , e d ∈ Z d + are defined in (1.19). 
Write formula (3.15) in terms of (3.16) as Upon rearranging the terms in the first series in (3.19) and substituting formula (3.18) into the second series, we arrive at We now consider the terms in (3.20) that correspond to a fixed n = (n 1 , . . . , n d ) ∈ Z d + (with |n| = N ). Denoting the sum of these terms by S n we have Plugging this into the right hand side in (3.21) leads us to (3.23) Representation (3.22) implies that S n is positive semidefinite and therefore Q N ≥ 0 for every N ∈ N. By (3.14), the operator Q defined in (3.13) is positive semidefinite which completes the proof of (3.11). We now show the equivalence of (1), (2) and (3) in the second part of Proposition 3.3. Proof of (1) =⇒ (2): Assume condition (1), i.e., that the reverse Stein inequality (3.11) is satisfied with equality. Then representation (3.22) implies that R n,i,j = 0 for all n ∈ Z d + . By (3.23), this means that Now we shall prove (3.12) by induction (on the length of words v, u ∈ F d ). The basis of induction ( |v| = |u| = 0) is trivial. Assume that (3.12) holds true, whenever |v| = |u| < N . Then in particular, we have for every m ∈ Z d + with |m| < N : Now take two words v, u ∈ F d of the length N and let a(v) = a(u) =: n = (n 1 , . . . , n d ). (3.26) If v = vi and u = ui for some v, u ∈ F d and i ∈ {1, . . . , d}, then we have CA v = CA u by the induction hypothesis and therefore, Let v = vi and u = uj for some i, j ∈ {1, . . . , d} and i = j. By (3.26), a( v) = n − e i and a( u) = n − e j . By (3.25), we have Multiplying (3.27) and (3.28) on the right by n i A j and n j A i respectively, we get By (3.24), the left hand side expressions in the two latter equalities are equal. Upon comparing the right hand side expressions we get CA v = CA u , i.e., A is C-abelian as wanted. Proof of (2) =⇒ (3): Assume now that A is C-abelian, i.e., that (3.12) holds. Then the identify G a C,A = G C,A is an immediate consequence of the series representations (3.10) and (2.10) for G a C,A and G C,A respectively. Proof of (3) =⇒ (1): We know from Theorem 2.8 (2) that G C,A satisfies the Stein equation, i.e., G C,A satisfies the Stein inequality (3.11) with equality. Hence trivially G a C,A satisfies (3.11) with equality whenever G a C,A = G C,A . This completes the proof of Proposition 3.3. A) is an output-stable pair, then by Theorem 2.2 (2) G C,A satisfies the Stein equation (2.15) and hence in particular , We now show that, for the abelianized case, the inequality in the reverse Stein inequality satisfied by the abelianized observability gramian G a C,A can be strict in the strong sense that the quantity G a C, is not even positive semidefinite. As an example, let A straightforward calculation shows that which is not positive semidefinite. Condition (3.12) is worth a formal definition. Definition 3.5. Let C ∈ L(X , Y). A d-tuple A = (A 1 , . . . , A d ) of bounded operators on X will be called C-abelian if (3.12) holds. One obvious way for a given operator d-tuple A to be C-abelian is for A itself to be commutative, i.e., for A i A j = A j A i for all 1 ≤ i, j ≤ d. We next show that, under an observability assumption, this is the only way. Proposition 3.6. Suppose that the output-stable pair (C, A) is observable and that A is C-abelian. Then the d-tuple A is commutative. Proof. Since A is C-abelian, relations (3.12) hold. Fix i, j ∈ {1, . . . , d} and note that by (3.12), since a(vij) = a(vji). Thus, for every v ∈ F d and x ∈ X . 
Since the pair (C, A) is observable, we have by (2.13) holding for every x ∈ X , which proves the commutativity relations We next show that the observability gramian always dominates the abelianized observability gramian. Proposition 3.8. Let (C, A) be an output-stable pair. Then: (1) (C, A) is also a-output-stable with (2) Equality occurs in (3.29) if and only if A is C-abelian: Proof. Note that the second statement in Proposition 3.8 is just a restatement of (2) ⇐⇒ (3) in Proposition 3.3. Thus it suffices only to prove the first statement. By definition, output-stability of (C, A) simply means that G C,A is bounded, while a-output stability means that G a C,A is bounded. The fact that a-output stability follows from output-stability therefore follows immediately from the general inequality (3.29). Thus it suffices to prove (3.29). For this purpose, recall that By the Cauchy-Schwarz inequality we have Therefore and (3.29) follows as wanted. Example 3.9. The converse of Proposition 3.8 part (1) can fail, i.e., there exists an output pair (C, A) which is a-output-stable but not output-stable. For example take Then and thus (C, A) is a-output stable. To show that (C, A) is not output stable, note that and therefore, C(A 1 A 2 ) n = 2 n 0 0 , so that for and therefore, the pair (C, A) is not output-stable. We conclude that a-outputstability has no obvious characterization in terms of positive semidefiniteness of some solution of a Stein inequality as in the noncommutative case (see Theorem 2.2 (2)). As a corollary of the gramian inequality (3.29) in Proposition 3.8, we have the following. Corollary 3.10. Let (C, A) be an output-stable pair. Then: A) is a-observable (respectively, exactly a-observable, then (C, A) is also observable (respectively, exactly observable). Assume therefore that Ker G a C,A is invariant under A j for each j = 1, . . . , d. Let x be a vector in Ker G a C,A . Then by the assumed invariance, A u x ∈ Ker G a C,A for every u ∈ F d . Then we have CA vu x = 0 for every u ∈ F d . Then letting n = 0 we get CA u x = 0 for every u ∈ F d and therefore, x ∈ Ker G C,A . Thus, Ker G a C,A ⊂ Ker G C,A and since the reverse inclusion holds by the first statement, equality follows. Example 3.11. We observed in part (1) of Corollary 3.10 that a-observability for an output-stable pair (C, A) implies observability. We now give an example to show that the converse can fail, i.e., there exists an output-stable observable pair which is not a-observable. For this purpose, let d = 2, X = C 4 , Y = C, C = 0 0 0 1 and A = (A 1 , A 2 ), where Then the pair (C, A) is output stable. Now we show that (C, A) is observable but not a-observable. Indeed, since we have A) is not a-observable we first compute A straightforward calculation gives Note that y 1 y 2 y 3 y 4 := C( is the bottom row of the matrix (I − λ 1 A 1 + λ 2 A 2 ) −1 and we use the standard adjoint formula for the inverse of a matrix to get Then it follows that the nonzero vector x = 0 1 0 0 ⊤ satisfies and therefore, the pair (C, A) is not a-observable. 3.2. Observability-operator range spaces and reproducing kernel Hilbert spaces: the commutative-variable case. We seek the analogue of Theorem 1.2 for the commuting multivariable case. We extend multivariable power notation (3.1) to any d-tuple A = (A 1 , . . . 
, A d ) of commuting operators on a space X : Note the connection between the commutative powers A n (with n ∈ Z d + ) and the noncommutative powers A v (with v ∈ F d ) in case A is a commutative operator d-tuple: for any operator X on X . In case (C, A) is an output stable pair with A a commutative operator d-tuple, the formulas (3.7), (2.10) and (3.10) for O a C,A , G C,A and G a C,A collapse (in view of (3.2)) to and The following proposition includes the analogue of Proposition 2.9 for the present commutative setting. Then: (1) For every f ∈ H Y (k d ) and every λ ∈ B d we have (2) The pair (G, M * λ ) is isometric: (3.39) Proof of (1): One can easily verify the identity (3.37) on monomials y · λ m (with y ∈ Y and m ∈ Z d + ) using (3.34). Then the result follows for all f ∈ H 2 Y (k d ) by linearity and continuity. Proof of (2): Note that G * : Y → H Y (k d ) is the identification of a vector y ∈ Y with the constant function y ∈ H Y (k d ). We then see that (3.38) is simply the operator expression of (3.37). Proof of (3): From (3.35) and (3.36) we see that and therefore, according to definition (3.31), Since the latter equality holds for every f ∈ H Y (k d ), (3.39) follows as asserted. Proof of (4): This can be derived directly from (3.35) or via Proposition 2.7 since O a G,M * λ = I and therefore, the pair (G, M * λ ) is exactly observable. Remark 3.13. Note that in contrast to the noncommutative case (Proposition 2.9), the operator is not unitary (just isometric). A simple calculation shows that If a pair (C, A) is a-output stable, then the observability operator O a C,A : X → H Y (k d ) is bounded and its range is a linear manifold in H Y (k d ). We have the following partial analogues of part (3) of Theorem 2.8. Theorem 3.14. Let (C, A) be an a-output stable pair. Then: where Q a is the orthogonal projection of X onto (Ker G a C,A ) ⊥ is isometrically equal to the reproducing kernel Hilbert space H(K a C,A ) with reproducing kernel K a C,A (λ, ζ) given by . We next discuss separately the case where A is C-abelian and then the general case. H(K a C,A ) for the case where A is C-abelian. In case (C, A) is an a-outputstable pair with A C-abelian, then we have the following commutative analogue of Theorem 2.10. Theorem 3.15. Let (C, A) be a contractive a-output-stable pair such that operator d-tuple A is C-abelian. Then: (1) The intertwining relations hold, and hence the linear submanifold . This mapping is isometric if and only if (C, A) is isometric and A is strongly stable. C,A is given the lifted norm (3.42) (so M is isometrically equal to H(K a C,A ) by Theorem 3.14 (1)), then the difference-quotient inequality holds for every f ∈ H(K a C,A ). Moreover, the difference-quotient identity holds for every f ∈ H(K a C,A ) if and only if the subspace (Ker G C,A ) ⊥ is A-invariant and the restriction (C 0 , A 0 ) (defined in (2.53)) of (C, A) to the subspace (Ker G C,A ) ⊥ is isometric. Proof. By (3.31) and (3.34), we have for every x ∈ X , (3.43) follows. This completes the proof of statement (1) in the theorem. Since the pair (C, A) is contractive and A is C-abelian, we have and conclude that the pair (C, A) is contractive. Similarly, assumption (3.46) means that the chosen pair (C, Taking strong limits as N → ∞ and noting that I = G C,A = v∈F d A * v ⊤ C * CA v then gives We have the following analogue of Theorem 2.13 for the present commutative situation. Theorem 3.17. 
Suppose that (C, A) and ( C, A) are two observable output-stable pairs with both A and A commutative such that K a C,A (λ, ζ) = K a C, A (λ, ζ) for all λ, ζ ∈ B d . Then there is a unitary operator U : X → X such that C = CU and A j = U −1 A j U for j = 1, . . . , d. (3.49) Proof. Suppose that (C, A) and ( C, A) are as in the hypothesis of the theorem. The identity of the kernels K a C,A and K a C, A implies equality of the respective coefficients of λ n ζ m for each n, m ∈ Z d + : If we define a mapping U by it follows that U extends by linearity to an isometry from D := span{A * m C * y : m ∈ Z d + and y ∈ Y} onto R := span{ A * m C * y : m ∈ Z d + and y ∈ Y} Since both (C, A) and ( C, A) are observable, we see that D is dense in X and that R is dense in X . Hence U extends to a unitary operator from X onto X by continuity. From the defining equations (3.50) for U we see that U C * = C * and U A * j = A * j U. By taking adjoints and using that U is unitary, we arrive at the intertwining equations (3.49) as wanted. Theorem 3.17 can be adapted to give the following result concerning containment between two backward-shift-invariant subspaces rather than equality; the finitedimensional case appears as Proposition 1.2 in [19]. H(K a C,A ): The general case. In case the a-output-stable pair (C, A) is such that A is not C-abelian, it can happen that the associated reproducing kernel Hilbert space is not invariant under the backward-shift tuple M * λ , as the following example shows. Then a straightforward calculation gives Thus K a C,A (λ, w) is positive definite on B 2 × B 2 and the space H(K a C,A ) is spanned by the two rational functions Furthermore, since and since 4λ The latter function is rational if and only if the single-variable function F (z) = For the general case, there is a simple replacement for M * λ | H(K a C,A ) . Specifically, given an a-output-stable pair (C, A), we define an operator-tuple T = (T 1 , . . . , T d ) on Ran O a C,A by We then have We next give the following analogue of Theorem 3.15 for the general case. (1) The Z-transformed observability operator O a C,A is a contraction of X into the reproducing kernel Hilbert space H(K a C,A ). It is an isometry if and only if the the pair (C, A) is a-observable. where T 1 , . . . , T d ∈ L(H(K a C,A )) are the operators defined in (3.53). (4) Equality holds in (3.56) for every f ∈ H(K a C,A ) if and only if the subspace (Ker G C,A ) ⊥ is A-invariant and the restriction (C 0 , A 0 ) (defined in (2.53)) of (C, A) to the subspace (Ker G C,A ) ⊥ is isometric. Since the pair (C, A) is contractive, the identity operator H = I X solves the Stein inequality (2.14). Then G a C,A ≤ G C,A ≤ I X (by part (1) of Proposition 3.8 and part (2) of Theorem 2.2). Thus, where Q a is the orthogonal projection of X onto (Ker G a C,A ) ⊥ . Therefore it holds for every x ∈ X that We have the equality instead of the first inequality in (3.57) if and only if G a C,A = Q a , that is, if and only if O a C,A is a partial isometry. Furthermore, the second inequality in (3.57) can be replaced by equality if and only if Q a = I X , i.e., if and only if the pair (C, A) is a-observable. This completes the proof of the two first assertions in the theorem. The multivariable difference-quotient relation (3.55) follows by the calculation (3.54). Furthermore, for every . Therefore, by (3.53), T j is unitarily equivalent to the compression of A j to (Ker G a C,A ) ⊥ and hence T j ≤ A j for j = 1, . . . , d. In particular, T j ∈ L(H(K a C,A )). 
For where the first inequality holds since Q a ≤ I and the second since (C, A) is a contractive pair. This proves inequality (3.56) and it is readily seen that equalities hold throughout in the last calculation for every x ∈ (Ker G a C,A ) ⊥ if and only the subspace (Ker G a C,A ) ⊥ is A-invariant and the restriction (C 0 , A 0 ) (defined in (2.53)) of (C, A) to the subspace (Ker G a C,A ) ⊥ is isometric. Finally suppose that H(K a C,A ) is included isometrically in H Y (k d ). Then the assumption (3.56) becomes Then we take the inner product of both parts in equality (3.54) with f : . Thus, . Then M is isometrically equal to a reproducing kernel Hilbert space H(K a C,A ) for a contractive pair (C, A). Therefore, M is contractively included in the Arveson space H Y (k d ). Proof. Take C = G| M where G is given by (3.62), A = T on M. Then (3.61) says that (C, A) is contractive. Iteration of (3.55) says that, for each f ∈ M, This unravels to the tautology and we also have the contractive inclusion property. Combining Theorems 3.20 and 3.21 gives the following uniqueness result for contractive solutions of the Gleason problem on a subspace M contained in H Y (k d ) isometrically. (5) in Theorem 3.20 then asserts that the subspace M = H(K a C,A ) is M * λ -invariant and that T j = M * λj for j = 1, . . . , d. We note that the proof of Theorem 2.13 is like the proof of the State-Space-Isomorphism Theorem for structured noncommutative multidimensional linear systems in [12]. It is known that the State-Space-Isomorphism Theorem (and related Kalman reduction procedure) fails in general for commutative multidimensional linear systems-see e.g. [30] for a recent account of the situation. The fact that uniqueness does hold in the special commutative situation in Theorem 3.17 shows that the technique in the proof of the State-Space-Isomorphism Theorem is salvageable in special commutative situations. A uniqueness result for solutions of the Gleason problem somewhat different from that in Theorem 3.22 was obtained in [3]; rather than assuming that T is a contractive solution of the Gleason problem on M = H(K C,A ) contained isometrically in H Y (k d ) as in Theorem 3.22, Alpay and Dubi in [3] assume instead that T is a commutative solution of the Gleason problem and are then able to conclude that necessarily T = M * λ | M . This latter result can be seen as an immediate consequence of our Theorem 3.17 above since, by the construction in the proof of Theorem 3.21, solutions (C, A) of K a C,A = K are in one-to-one correspondence with solutions T of the Gleason problem. We illustrate the preceding analysis by two examples. 3.4. Applications of observability operators: the commutative setting. In this subsection we discuss applications of observability operators for the commutative setting. This subsection parallels Subsection 2.3. For subspaces of H Y (k d ) invariant under the forward shift operator-tuple M λ , we have the following analogue of the Beurling-Lax-Halmos-de Branges theorem due originally to Arveson [7] and McCullough-Trent [38] (for the case of isometric inclusion); in fact, one can check that our proof, namely, the commutative adaptation of the proof of Theorem 2.14, follows that of [8] if one makes the substitution L = ( O a D T * ,T * ) * (where L is the key operator appearing in [8]). 
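The Gleason-problem discussion just above (Theorems 3.20-3.22) refers to a "contractive solution of the Gleason problem" through displays that did not survive extraction. For readability, here is the standard formulation that the argument appears to use; this is a reconstruction under that assumption, not a verbatim quotation of the source. A tuple T = (T_1, ..., T_d) of operators on a subspace M of H_Y(k_d) solves the Gleason problem if

\[
  f(\lambda)-f(0)=\sum_{j=1}^{d}\lambda_j\,(T_jf)(\lambda)
  \qquad\text{for all } f\in\mathcal{M},\ \lambda\in\mathbb{B}^d,
\]

and the solution is called contractive when

\[
  \sum_{j=1}^{d}\|T_jf\|^{2}\;\le\;\|f\|^{2}-\|f(0)\|_{\mathcal{Y}}^{2}
  \qquad\text{for all } f\in\mathcal{M},
\]

which is exactly the statement that the pair (C, A) = (G|_M, T), with G: f ↦ f(0), is a contractive pair, as used in the proof of Theorem 3.21 above.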
In general, an operator Θ between two Arveson spaces H U (k d ) and H Y (k d ) is said to be multiplier if Θ intertwines the respective coordinate-function multipliers: θM λj f = M λj θf for all f ∈ H U (k d ). It is straightforward to see that a multiplier Θ necessarily has the form Θ = M θ : f (z) → θ(z) · f (z) where θ(z) = n∈Z d + θ n z n is a bounded, holomorphic L(U, Y)-valued function on B d , but not all bounded, holomorphic, operator-valued functions on B d are multipliers (see e.g. [1]). In case the multiplication operator has operator norm at most 1, we say that θ is a contractive multiplier and belongs to the (commutative) multivariable Schur-class S d (U, Y). Unlike the convention in the classical case, such a multiplier θ is said to be inner if in addition M θ is a partial isometry. if and only if there is a coefficient Hilbert space U and a contractive multiplier θ ∈ S d (U, Y) so that M = θ · H U (k d ) with lifted norm where Q is the orthogonal projection onto (Ker M θ ) ⊥ ⊂ H U (k d ). (2) The subspace M in part (1) above is isometrically contained in H Y (k d ) if and only if the corresponding contractive multiplier θ ∈ S d (U, Y) can be taken to be inner. Proof. The proof is a straightforward commutative adaptation of the proof of Theorem 2.14 and hence will be left to the reader. We remark that, for the case where M is contained isometrically in H Y (k d ), we are unable to obtain a representer θ for which M θ is isometric but rather only a representer with M θ partially isometric. Indeed, one can check that the argument in the proof of Theorem 2.14 breaks down because, for the case here, M λj is only contractive rather than isometric. Remark 3.26. As observed in [8], from the function-theory point of view Theorem 3.25 is not a true analogue of the classical Beurling-Lax theorem since the characterization of θ is purely operator-theoretic with no information on the boundary behavior of the associated multiplier θ(z). This deficiency has now been remedied in the paper of Greene-Richter-Sundberg [32]. The following is the analogue of Theorem 2.15; we omit the proof as it exactly parallels the proof of Theorem 2.15. The result goes back to Drury [28]. Theorem 3.27. Suppose that T = (T 1 , . . . , T d ) is a commutative row-contractive operator-tuple with T * asymptotically stable and define the defect operator D T * and the coefficient space Y as in (2.63). Then there is a subspace M ⊂ H Y (k d ) invariant for the backward shift operator-tuple M * λ on H Y (k d ) so that T is unitarily equivalent to P M M * λ | M . In particular, T has a Arveson-shift dilation unitarily equivalent to M λ on H Y (k d ). Remark 3.28. The result in Theorem 3.27 is tied to the unit ball with associated multivariable resolvent operator (I − λ 1 T * 1 − · · · − λ d T * d ) −1 , associated defect operator D T * = (I − T 1 T * 1 − · · · − T d T * d ) 1/2 , associated observability operator of the form O a D T * ,T * = D T * (I − λ 1 T * 1 − · · · − λ d T * d ) −1 and associated ambient kernel function k(λ, ζ) = 1/(1 − λ 1 ζ 1 − · · · − λ d ζ d ). We mention that there has been a lot of work centering around other types of kernels and giving a model theory for other classes of operator-tuples by using appropriately modified observability-like operators. 
Specifically, Müller-Vasilescu [39] for the commutative ball case with k(λ, ζ) = 1/(1 − λ 1 ζ 1 − · · · − λ d ζ d ) m , Curto-Vasilescu [23,24] for the commutative polydisk case with k(λ, ζ) = (1/(1−λ 1 ζ 1 ) · · · (1−λ d ζ d )) m , and Pott [51] and Bhattacharyya-Sarkar [18] for the commutative case with k(λ, ζ) = 1/(1 − P (λ 1 ζ 1 , . . . , λ d ζ d )) with P equal to a "positively regular polynomial". The most general form of results along this line is due to Ambrozie-Engliš-Müller [4] and Arazy-Engliš [5]: given a positive-definite kernel k(λ, ζ) on a domain D ⊂ C d and a d-tuple of operators T = (T 1 , . . . , T d ) with Taylor spectrum contained in D for which one can make sense of the defect operator D T * := 1 k (T, T ) and of the observability operator O D T * ,T * : x → D T * k(λ, T ) (for example, if k(λ, ζ) has no zeros in D × D and T has Taylor spectrum contained in D), then, under the assumption that D T * ≥ 0 and that an additional stability condition on T * holds, O D T * ,T * implements a unitary equivalence between T and . . , M * λ d . The noncommutative case is not as well developed at this writing, but there is the paper of Popescu [48] which handles the case of a Cartesian product of noncommutative balls (and therefore including a noncommutative polydisk). We expect that many of the ideas of the present paper, including the interplay between the noncommutative and commutative settings and the connections with system theory, have some parallels in these other situations.
2014-10-01T00:00:00.000Z
2006-10-20T00:00:00.000
{ "year": 2006, "sha1": "42b89add7ac15b0fcbe1179f9bedd6bf28cb131e", "oa_license": null, "oa_url": "http://arxiv.org/pdf/math/0610634", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "42b89add7ac15b0fcbe1179f9bedd6bf28cb131e", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics", "Computer Science" ] }
214176762
pes2o/s2orc
v3-fos-license
Bilateral Karapandzic flap: a work horse for carcinoma of lower lip Lips are not only an important aesthetic component of the face but are also necessary for facial expression, speech, and eating. Malignant lesions involving the midline of the lower lip warrant a wide excision to ensure a disease-free margin. The resultant defects are usually large and often involve two-thirds or more of the lower lip. Larger defects require the use of local/distant flaps. An ideal reconstruction technique would involve a single-stage procedure that replaces the defect with similar tissue, restores aesthetics, maintains oral competence and function, and is reliable. There are various techniques for reconstruction of these defects, such as the Abbe, Estlander, Bernard-Burrow, Gillies, and Karapandzic flaps, or combinations of these. The Karapandzic flap is a neurovascular myocutaneous flap that offers a very satisfactory result and was described by Karapandzic in 1974. In this case, we chose the Karapandzic flap as it has been proved to have a clear advantage in maintaining oral competence through preservation of the orbicularis oris muscle, facial artery, and sensory and motor nerves. lip and involved the facial skin (Figure 1a). Neck examination showed enlarged level 1b nodes measuring 1×1 cm, mobile, and firm in consistency on both sides (cT3N2cM0). Wedge biopsy from the lesion showed moderately differentiated squamous cell carcinoma. Contrast-enhanced computed tomography (CECT) showed a heterogeneous, irregular ulcero-proliferative mass lesion measuring 4.6×2.1×2.0 cm in the lower lip involving the inferior orbicularis oris, with a few non-enhancing central portions suggestive of necrotic changes. CECT neck showed an enlarged necrotic lymph node in left level 1B measuring 2.8×1.8 cm with peripheral enhancement. An enlarged right level 1B lymph node measuring 1×1 cm showed heterogeneous enhancement with minimal internal necrotic changes. The patient was then taken up for wide local excision with left modified radical neck dissection, right supraomohyoid neck dissection, and reconstruction. After surgical resection of the tumor, the lower lip defect was assessed and a bilateral Karapandzic flap was planned. In this case, the right Karapandzic flap was raised based on the facial artery, but the left Karapandzic flap was raised on a random blood supply as the facial vessels were ligated to achieve tumor clearance from the left submandibular area. In our case, before suturing the flap, both upper central incisor teeth were removed to facilitate food intake. The postoperative period was uneventful except for microstomia, which the patient was able to manage by taking feeds with a spoon. Histopathology showed moderately differentiated squamous cell carcinoma, pT3N1, with close margins and without lymphovascular invasion. The patient was advised adjuvant radiotherapy and remained on regular follow-up after radiotherapy. DISCUSSION Lips have aesthetic appeal and play an essential role in maintaining oral competence, which in turn depends on normal morphology with intact sensory and motor nerve supply. 1 The risk factors for the development of squamous cell carcinoma in the lower lip are UV rays from chronic sun exposure, previous radiotherapy, and old age. Squamous cell carcinoma is more common in the lower lip than the upper lip. Oncological resection is the most common cause of large lip defects. Other etiologies include trauma, burns, infectious diseases, hemangioma, and congenital clefts. 3 The resulting defects usually involve two-thirds or more of the lower lip. 
These defects are reconstructed using local flaps and free flaps. The Abbe flap is a full-thickness flap, which is raised based on the labial artery. The height of the flap should be similar to the height of the defect, and the width of the flap should be half that of the defect. 4 The main advantage is the ability to replace the mucosal and vermilion surface of the lips. The demerits of this flap are microstomia, and it is a two-stage procedure with poorer patient compliance. 1 The Estlander flap is suitable for commissural defects, and it is done as a single-stage procedure. The Gillies flap is also a full-thickness flap, which rotates tissue around the lateral commissure to cover the defect of the lower lip. 4 The Webster Bernard Burrow flap is also used to reconstruct lower lip defects. 5 In this flap, the cheek and remaining lip are rotated medially, and the buccal mucosa is advanced to create the vermilion surface. As the advanced new tissue lacks sensation and sphincteric action, there will be loss of oral competence. 6 The Karapandzic flap is a neurovascular myocutaneous flap based on the superior and inferior labial arteries. It has the advantage of preserving the motor and sensory nerve supply with intact orbicularis oris muscle fibers, which provide good oral competence. 7 The preserved blood supply improves the survival of the flap. It is ideal in situations where no new lip tissue is required, in central defects or lateral defects that involve the commissure. Cases with a previous history of radiotherapy might have interference with the blood supply, so a pre-irradiated area may not be ideal for this reconstruction. 1,3 As no new lip tissue is recruited, microstomia may result after the closure of larger defects. Loss of the commissure is also a disadvantage in this reconstruction method. 8 Free flaps like the radial forearm flap, parascapular flap, anterolateral thigh flap, and lateral arm flap are commonly used when there is insufficient adjacent cheek tissue because of their excellent color match with facial skin, pliability, and thin tissue mass. Disadvantages are the lack of functional muscles, lack of motor innervation, and voluntary tightening of the lip. 9 For defects involving more than two-thirds of the central lip, there is a need for raising bilateral Karapandzic flaps. If there is involvement of the commissure on one side, only a unilateral Karapandzic flap is sufficient. Lip anatomy and operative technique The oral sphincter is composed of the circumferential fibers of the orbicularis oris muscle and the radial orientation of the elevators and depressors from its outer margins. The sensory and motor nerve supply and labial vessels of the lips also enter into this area in a radial fashion, which forms the basis of the Karapandzic flap. A curvilinear incision was given, extending from the defect towards the alar base along the nasolabial folds, with the width of the flap equal to the height of the defect (Figure 1b). The incision was deepened through the skin and subcutaneous tissues, after which blunt dissection was done in a radial fashion along the incoming nerves and vessels, which should be preserved, to detach the lateral margin of the orbicularis oris from its attachments and obtain the required mobility. Although the Karapandzic flap is based on facial vessel branches, in our case, because of the need for tumor clearance from the submandibular area on the left side, the flap was raised on a random blood supply (Figure 1c). On the right side, it was harvested based on the facial artery. 
Mucosal incisions were given near the margins of the defect to enable closure. Upper incisors were removed to facilitate food intake later on. The flaps were then mobilized and rotated medially into the defect (Figure 1d) and closed in the center with tension-free sutures (Figure 1e). In a prospective study of 7 patients who underwent Karapandzic flap reconstruction for carcinomas with defects ranging from 40% to 75% of the lip circumference, the functional and aesthetic outcome was considered excellent/good in 85% of cases. 10 If there is post-operative microstomia with functional compromise, then the patient can be advised commissuroplasty. 11 In this case, postoperatively, there were no wound complications, and the outcome was considered satisfactory except for microstomia (Figure 1f).
2020-01-30T09:14:29.289Z
2020-01-24T00:00:00.000
{ "year": 2020, "sha1": "b1c8185a7aeb9d44ca71a85de309eb99c36b4e1d", "oa_license": null, "oa_url": "https://www.ijorl.com/index.php/ijorl/article/download/1852/1113", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "1bf2c319a24e6f5129d5dba5847ed72ba22c7455", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
16831578
pes2o/s2orc
v3-fos-license
Pathogenicity of a Microsporidium Isolate from the Diamondback Moth against Noctuid Moths:Characterization and Implications for Microbiological Pest Management Background Due to problems with chemical control, there is increasing interest in the use of microsporidia for control of lepidopteran pests. However, there have been few studies to evaluate the susceptibility of exotic species to microsporidia from indigenous Lepidoptera. Methodology/Principal Findings We investigated some biological characteristics of the microsporidian parasite isolated from wild Plutella xylostella (PX) and evaluated its pathogenicity on the laboratory responses of sympatric invasive and resident noctuid moths. There were significant differences in spore size and morphology between PX and Spodoptera litura (SL) isolates. Spores of PX isolate were ovocylindrical, while those of SL were oval. PX spores were 1.05 times longer than those of SL, which in turn were 1.49 times wider than those of the PX. The timing of infection peaks was much shorter in SL and resulted in earlier larval death. There were no noticeable differences in amplicon size (two DNA fragments were each about 1200 base pairs in length). Phylogenetic analysis revealed that the small subunit (SSU) rRNA gene sequences of the two isolates shared a clade with Nosema/Vairimorpha sequences. The absence of octospores in infected spodopteran tissues suggested that PX and SL spores are closely related to Nosema plutellae and N. bombycis, respectively. Both SL and S. exigua (SE) exhibited susceptibility to the PX isolate infection, but showed different infection patterns. Tissular infection was more diverse in the former and resulted in much greater spore production and larval mortality. Microsporidium-infected larvae pupated among both infected and control larvae, but adult emergence occurred only in the second group. Conclusion/Significance The PX isolate infection prevented completion of development of most leafworm and beet armyworm larvae. The ability of the microsporidian isolate to severely infect and kill larvae of both native and introduced spodopterans makes it a valuable candidate for biocontrol against lepidopteran pests. Introduction The order Lepidoptera is comprised of more than 150000 species [1], some of which are among the world's most serious agricultural and forest pests [2,3,4]. These pests inflict injuries on many types of plants, including crop plants and forest trees, causing huge amounts of loss to the vegetable and forest industries worldwide [5,6] through their feeding on plant parts [7]. This is typical of the caterpillars of the diamondback moth (Plutella xylostella, PX), a major pest of Brassica crops [8]; the beet armyworm (Spodoptera exigua, SE), a pest of more than 90 crop species in at least 18 families [3]; and Spodoptera litura (SL), a pest of many food crops [9] worldwide. The first, PX, is a threat to agricultural crops for several reasons-it has a high degree of genetic diversity and its host plants are widely grown around the world. In Asia, where most of its key natural enemies, such as larval parasitoids, are not abundant [10], PX is considered the most destructive pest of crucifers [11] and was first recorded in northern peninsular Malaysia in 1925 [12]. SE is a pest of cotton, tomatoes, celery, lettuce, cabbage, and alfalfa [13]. 
The larvae of this species feed on both foliage and fruit, causing serious damage [13], and adults have increased invasive properties as they are capable of migrating over large distances to find suitable habitats [14]. Heavy infestations may occur suddenly when the weather is favorable [15]. In Malaysia, this armyworm is a recently reported invasive pest [16,17]. Its congeneric species, SL is native to South East Asia [18], where attacks cotton, groundnut, rice, tomato, tobacco, citrus, cocoa, potato, rubber, castor, millet, sorghum, maize, many other vegetables [19], weeds, and ornamental plants [20], as well as seedlings [21]. The early larval stages feed preferentially on ''intermediate'' leaves (i.e., those between immature and mature leaves), whereas the fourth instar larvae are capable of consuming most of the leaves [21]. Spodopterans cause substantial crop losses by feeding voraciously on leaves-etching on the bracts of fruiting forms [22,13], which causes heavy loss of flower buds and newly formed fruits [23], scraping the leaf surface, which produces large irregular holes on leaves leaving only midrib veins, skeletonization, and defoliation [24]. Severe infestations often result in cosmetic injuries that can reduce marketability. Efforts to counteract such damage rely heavily on the use of chemical insecticides [25,26]. However, SE [27,28] and SL [29] have developed resistance to most classes of chemical insecticides worldwide [30,31]. Frequent application of insecticides targeted at the beet armyworm did not prevent extensive damage and losses of crops, such as onions, eggplants, and crucifers [17]. Other strategies consist of using sex pheromones to trap adults or Bacillus thuringiensis products as pesticides [3,32]. However, although the latter has at times been successful, such strategies are hampered by the development of resistance [13]. Another promising method involves the use of natural parasites and predators. Although larval parasitoid use has been successful in suppressing spodopteran pest populations in Europe, control attempts based on this approach have been severely impeded by the scarcity of such enemies in Asia. In recent years, there has been a great deal of research regarding microsporidia related to their use in biocontrol of lepidopterans [33]. Almost all such studies have used symbionts of the target species. However, this may not be applicable to invasive species. Juliano and his colleague [34] reported that when a nonnative species escapes the parasites that attack it in its native range, the likelihood of that species achieving high abundance and spreading can be enhanced. These authors also argued that the presence of parasites that are capable of attacking non-native species may help to keep the density of these species low. In Malaysia, SE has become a very important pest following its invasion as it feeds on almost all types of vegetable crop [17]. The braconid Microplitis manilae and the tachinid Peribaea orbata showed increased abilities to parasitize this increasingly abundant pest; however, their generalist nature limits their use as effective biological control agents. Isolation Microsporidium species from many Lepidopterans has been documented [35,36,37]. Almost half of the described genera of microsporidia have an insect as the host [38]. Some microsporidia can produce infection on nontarget hosts. 
Solter and co-workers [37] found that microsporidia occurring in European populations of Lymantria dispar produce atypical and heavy infections in American lepidopteran species. Microsporidia isolated from lepidopteran hosts were infective toward Lymantria dispar. A Nosema species isolated from a noctuid moth host generated massive infections in L. dispar larvae [39]. SE was shown to be susceptible to a microsporidium isolated from different lepidopterans [39]. A microsporidium isolated from SL larvae was also reported to be pathogenic toward other non-natural lepidopteran hosts [40]. Most studies to identify candidate microsporidia for microbial control of spodopteran pests did not address the likely possibility that spodopteran non-symbiotic microsporidia may also be viable candidates. Therefore, any microsporidium that is not a symbiont but has high infectivity may also be a valuable candidate for control of spodopteran pests. The present study was performed to characterize both morphologically and at the molecular level a microsporidium isolated from wild diamondback moths and to evaluate its effects on invasive and native armyworms. Statement on Ethic Issues This study was conducted in accordance with the principles expressed in the Declaration of Helsinki. The study was approved by the Biological Research Ethics Committee at Universiti Kebangsaan Malaysia. Insects and Experimental Subjects Three lepidopteran pests were included in the present study: two native species, the diamondback moth (PX) and the leafworm (SL), and one invasive species, the beet armyworm (SE). PX and SL larvae were collected from cruciferous vegetable farms in the Cameron Highlands (CHs), Pahang, Malaysia, located at 04°27′N and 101°22′E. CHs lies 1400 m above sea level, with an average temperature of 22 ± 2°C and relative humidity of 90 ± 5% [41]. It is a mountainous area with approximately 75% of the area above 1000 meters above sea level [42]. Over 5890 ha of CHs is in use for agricultural purposes [43], with vegetable cultivation representing 47% of the agricultural activities, followed by tea (44%), flowers (7%), and fruit (1%) [44]. Two hundred larvae of each species (PX and SL) were collected from the field. A similar number of third larval instars of SE and SL were provided by the Malaysian Agriculture Research and Development Institute (MARDI), Malaysia, for the in vivo study. Field-collected and laboratory-reared larvae of the different lepidopterans were maintained on potted cabbage plants (Brassica oleracea var. capitata, 5-8 fully expanded leaves) kept in screen cages (38 cm × 26 cm × 26 cm) under laboratory conditions [temperature 25 ± 5.0°C, relative humidity 60-80% RH, and a photoperiod of 12:12 (L:D) hours] and a honey feeding regime (a 10% solution provided to adults) similar to those of Kermani and co-workers [45]. Laboratory larvae were routinely obtained from MARDI when necessary. A total of 250 larvae of SL and 250 larvae of SE were infected with m-PX through their diet. Five larvae from each treatment group were sacrificed every 24 hours post-infection for spore counting. The remaining 5 larvae were observed for development and mortality. Larvae were sacrificed at the end of the experiment for histopathology slide preparation. Observations of Spore Morphology in PX and SL The PX and SL larvae were macerated and ground in a mortar before adding distilled water. The resulting crude suspensions of spores (m-PX and m-SL) were filtered through muslin to remove larval tissues. 
The suspensions were then centrifuged at 1000 × g at 10°C for 10 min. The pellets of m-PX and m-SL were resuspended in TE buffer and the spores were purified by mixing with Percoll 90% (1:1) and subjected to gradient centrifugation at 3000 × g for 30 min at 4°C, adopting a slightly modified previous method [46]. PCR Identification and Sequencing Spore suspension (2 × 10⁸ spores in 0.25 ml of TE buffer) was mixed with an equal volume of glass beads (0.4 mm) in glass tubes measuring 10 × 75 mm and shaken at maximum speed on a vortex mixer for 1 min. The homogenate was incubated with proteinase K (mixed with 300 µl of Tris-HCl (pH 9.5), 75 µl of 10% SDS, and 25 µl of 0.1% 2-mercaptoethanol) for 1 h at 56°C to release the DNA from the nuclei, following slightly modified published procedures [47,48]. The DNA was extracted using a QIAamp DNA Mini Kit (Qiagen, Hilden, Germany) according to the manufacturer's instructions [49]. The small subunit (SSU) rRNA gene was amplified from each of the m-PX and m-SL isolates using the primers 18f 5′-CACCAGGTTGATTCTGCC-3′ and 1537r 5′-TATGATCCTGCTAATGGTTC-3′ designed by Baker and colleagues [50]. PCR amplification was carried out in a total volume of 25 µl using 100 ng of DNA, 20 pmol of each primer, 200 µM of each dNTP, 50 mM MgCl₂ with PCR buffer, and 2.5 U of Taq DNA Polymerase (Promega, Madison, WI). Amplification was performed in a PTC-100™ Programmable Thermal Controller (MJ Research, Waltham, MA) for 40 cycles of 94°C for 1 min, 57°C for 1 min, and 72°C for 2 min. An aliquot of 5 µl from each reaction was run on a 1.2% agarose gel to visualize the PCR product. The PCR product of about 1.2 kb was purified using a QIAquick PCR Purification Kit (Qiagen, Valencia, CA) according to the manufacturer's instructions, and sent to First Base Laboratories Sdn. Bhd. (Shah Alam, Malaysia) for sequencing. The microsporidial SSU rRNA gene sequences of the isolates (m-PX and m-SL) and the other 22 SSU rRNA gene sequences (Table 1) were aligned using CLUSTAL X [51]. Histopathological Examinations Larvae (1st-3rd instar) from the experimental infections, which had been infected at the 1 × 10³ spores/ml concentration, were used for histological examination. Larvae of SE and SL were dissected under sterile conditions, fixed in Carnoy's fluid, dehydrated in a graded series of ethanol solutions (70%, 80%, 95%, and absolute alcohol), and cleared in ethanol:butanol (1:1) for 2 h and absolute butanol for a further 2 h. They were then embedded in paraffin wax (58°C to 60°C) and the different tissues were cut into sections 0.7 µm thick. The cutting of sections and their staining and mounting procedures were carried out following a published work from our laboratory [45]. Experimental Infections All experiments were conducted in an air-conditioned laboratory (25 ± 5°C, with a photoperiod of 12 L:12 D, and 60-80% relative humidity). Four doses of m-PX (1 × 10², 1 × 10³, 1 × 10⁴, and 1 × 10⁵ spores/ml) were prepared according to a previous work [52]. Each dose was dispensed onto 1 × 1 cm² Nosema-free mustard leaves and placed in 24-well plastic plates. The larvae were then allowed to feed on the spore-contaminated mustard leaves as a means of infection. Control larvae were fed mustard leaves minus the spores. Data Collection and Statistical Analysis The types of m-PX and m-SL spores were determined by observing fresh and Giemsa-stained slides under a light microscope at 400× and 1000× magnification (Olympus BX43 light microscope equipped with an Olympus DP72 10-megapixel camera). 
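The alignment step above used CLUSTAL X, and, as described in the next paragraphs, tree building and bootstrapping were done in PAUP 4.0b8 with Kimura two-parameter distances. As a rough, hedged illustration of that kind of distance-based workflow, the sketch below uses Biopython instead of PAUP; the alignment file name and the outgroup label are hypothetical, and the simple "identity" distance model is a stand-in for the Kimura two-parameter model actually used in the study.

from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# Hypothetical FASTA alignment of the 24 SSU rRNA sequences (m-PX, m-SL and
# the 22 reference sequences), e.g. exported from CLUSTAL X.
alignment = AlignIO.read("ssu_rrna_aligned.fasta", "fasta")

# Pairwise distances; "identity" is a simple substitute for the
# Kimura two-parameter distances used in the paper.
distances = DistanceCalculator("identity").get_distance(alignment)

# Neighbor-joining tree, rooted afterwards on the outgroup Nosema bombi
# (sequence name assumed to match the label used in the alignment file).
tree = DistanceTreeConstructor().nj(distances)
tree.root_with_outgroup("Nosema_bombi_AY741104")

Phylo.draw_ascii(tree)  # quick text rendering of the NJ topology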
The size of the spores was measured using a micrometer. The spore concentrations at the stipulated times were also estimated by sacrificing a certain number of the infected larval instars. Final observations were carried out on the 15th day post-infection. The tissues (gut, Malpighian tubules, fat body, ganglion, and gonads) were inspected for the presence of spores and pathological effects. For both spodopteran species, larval and/or pupal deaths as well as adult emergence were recorded in both the control and contaminated groups. The spore concentration was determined using a hemocytometer, following a published work [53]. The development and mortality rate of the larvae were monitored and recorded for 5 days, beginning at 24, 48, 72, 96, and 120 hours post-infection (hpi). Phylogenetic analysis based on the resultant alignment was performed using the neighbor-joining (NJ) algorithm for distance analysis (Kimura two-parameter distances), with parsimony determined using Phylogenetic Analysis Using Parsimony (PAUP) 4.0b8 [54]. One thousand bootstrap replicates were generated to test the robustness of the tree. Sequence homology analyses were performed using BLAST database searches. The sequence from Nosema bombi (Accession No. AY741104) was used as an out-group. The differences in mean spore type sizes between PX and SL were examined by analysis of variance (ANOVA) using the MiniTab statistical software v. 16, with P < 0.05 taken to indicate statistical significance. Spore Shape and Dimensions The spores isolated from the diamondback and leaf armyworms differed in both shape and size: those of the PX isolate were ovocylindrical in shape, 3.167 ± 0.21 µm in length, and 1.61 ± 0.115 µm in width, while those of the SL microsporidium were oval shaped, 3.00 ± 0.10 µm in length, and 2.41 ± 0.11 µm in width. There was a significant difference between the two lepidopteran species in mean spore size (F = 187.43, df = 34, P = 0.0001) (Table 2). PCR Amplification and Sequence Analysis Amplification of DNAs from the m-PX and m-SL isolates with the 18f/1537r primer set yielded amplicons in both cases. The gene was located between nucleotides 2677 and 3908 relative to the 5′ end of the rRNA gene. Both amplicons (m-SL and m-PX) were about 1200 bp in length, suggesting that these isolated spores were microsporidia (Figure 1). Phylogenetic Analysis The G+C content of the small subunit (SSU) rRNA gene was 33.9%. Phylogenetic analysis suggested that all 24 microsporidia could be divided into two distinct clades: clade I consisting of microsporidia isolated from lepidopterans only, and clade II consisting of microsporidia isolated from amphipods, lepidopterans, and decapods. The neighbor-joining (NJ) tree also grouped the m-PX sequence with the other three microsporidia: Vairimorpha sp. (AF124331), Vairimorpha imperfecta (AJ131646), and Nosema plutellae (AY960987). All of these microsporidia were isolated from PX. All sequences of N. bombycis 1-6, including m-SL, were grouped together under clade I (microsporidia isolated from lepidopterans only), although the strains were isolated from different locations throughout the world. These results suggested that m-SL spores are closely related to N. bombycis (Figure 2). Histopathological Analysis Severe microsporidian infections were noticed in distinct larval tissues. The intestinal cells were severely infected, but the infection also extended to other tissues such as the body fat, ganglia, and gonads of SE and SL. 
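The spore-size comparison reported above (F = 187.43, df = 34, P = 0.0001) was run in Minitab. A minimal sketch of an equivalent one-way ANOVA in Python is given below; the measurement arrays are made-up placeholders rather than the study's micrometer readings, so the numbers it prints will not reproduce the published statistics.

import numpy as np
from scipy import stats

# Hypothetical spore-length measurements (µm); replace with the real
# micrometer readings for the PX and SL isolates.
px_lengths = np.array([3.1, 3.2, 3.0, 3.3, 3.2, 3.1])
sl_lengths = np.array([3.0, 2.9, 3.1, 3.0, 2.9, 3.0])

# One-way ANOVA comparing mean spore length between the two isolates
# (with only two groups this is equivalent to a two-sample t-test, F = t**2).
f_stat, p_value = stats.f_oneway(px_lengths, sl_lengths)
print(f"F = {f_stat:.2f}, P = {p_value:.4f}")

# Group summaries in the "mean ± SD" style used in the paper.
for name, data in {"m-PX": px_lengths, "m-SL": sl_lengths}.items():
    print(f"{name}: {data.mean():.3f} ± {data.std(ddof=1):.3f} µm")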
More severe infection was observed in SL tissue, with perforation of the entire wall of the intestine and breakdown of the mucosal barrier (Figure 3). No spores were found in the muscles of the larvae of either insect. Also, no octospores were detected in infected tissues of the spodopteran larvae. PX Infection Dynamics in Spodopterans When SE larvae were fed on the PX spore-contaminated diet, infection occurred within 3 days post-feeding (72 hpi). Infection gradually increased thereafter for all four spore doses, reaching peaks on day 5 (120 hpi) post-meal uptake. The increase in infection was more pronounced at the two highest spore concentrations (1 × 10⁴ and 1 × 10⁵ spores/ml). However, in the control larvae maintained on the spore-free diet, the infection level remained low even as time progressed (Figure 4A). When the leafworm (SL) larvae were provided access to the contaminated meal, infection occurred within the same time frame as observed previously, but peaks were attained at 96 hpi and 48 hpi post-feeding for the first three doses and the highest dose, respectively. Spore production subsequently dropped sharply. The control group showed a similar pattern of infection dynamics, but at far lower spore numbers (Figure 4B). PX Infection and Spodopteran Mortality Responses The first larval death occurred on day 4 post-infection in SE and two days earlier in S. litura. All SE-infected larvae died before and after 15 days post-infection, whereas those of SL succumbed between the 6th and 15th days post-infection. Pupation occurred in both spodopteran species, but in contrast to the control groups, no spodopteran-infected larvae reached the adult stage. It was clear from these observations that the isolate from PX can produce infection in these two spodopteran pests (Figures 5A and B). Discussion The SSU rRNA genes of microsporidia are highly conserved, making them useless for distinguishing between very closely related species, even those that can be distinguished on morphological criteria, but they are still useful for genus-level identification [46,48]. The SSU rRNA gene had a G+C content of 33.9%. Phylogenetic analysis suggested that all 24 microsporidia could be divided into two distinct clades: clade I consisting of microsporidia isolated from lepidopterans only, and clade II consisting of microsporidia isolated from amphipods, lepidopterans, and decapods. Both the m-SL and m-PX sequences were grouped together in clade I, which contains microsporidia belonging to the genera Nosema, Endoreticulatus, and Vairimorpha. Thus, this study confirmed that these two isolates were members of the N. bombycis complex and belonged to the same genus, i.e., Nosema, but were from different species [48]. The neighbor-joining (NJ) tree also grouped the m-PX sequence with the other three microsporidia: Vairimorpha sp. (AF124331), V. imperfecta (AJ131646), and N. plutellae (AY960987). All of these microsporidia were isolated from P. xylostella: Vairimorpha sp., V. imperfecta, and N. plutellae were isolated from diamondback moths from Germany, Malaysia, and Taiwan, respectively. Similar results were reported previously by Solter and colleagues [55], who also suggested that the specificity of a parasite toward the host is limited by geographical distance. Different species of microsporidia should infect the same species of host in different localities. A study by Ku and co-workers [56] on microsporidia isolated from PX from Taipei and Taiwan suggested that the parasites were N. bombycis and N. plutellae, respectively. 
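The spore concentrations behind the infection-dynamics curves described above (Figures 4A and 4B) were obtained with a hemocytometer. The sketch below shows one common way such counts are converted to spores/ml and averaged per sampling time; the counts, the chamber factor of 10⁴, and the dilution factor are illustrative assumptions, not values taken from the study.

import statistics

def spores_per_ml(counts_per_large_square, dilution_factor=1):
    """Convert hemocytometer counts to spores/ml.

    Assumes a standard Neubauer chamber in which one large square holds
    0.1 µl, so the mean count per large square × 1e4 gives spores per ml.
    """
    return statistics.mean(counts_per_large_square) * 1e4 * dilution_factor

# Hypothetical counts (four large squares each) for five sacrificed larvae
# at one sampling time.
counts_at_96_hpi = [[12, 15, 11, 14], [9, 10, 12, 11], [20, 18, 22, 19],
                    [7, 8, 6, 9], [14, 13, 15, 16]]

per_larva = [spores_per_ml(c, dilution_factor=10) for c in counts_at_96_hpi]
print(f"96 hpi: mean = {statistics.mean(per_larva):.2e} spores/ml "
      f"(n = {len(per_larva)} larvae)")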
All sequences of N. bombycis 1-6, including m-SL, were grouped together into clade I (microsporidia isolated from lepidopterans only), although the strains were isolated from different locations throughout the world. This suggests that m-SL spores are closely related to N. bombycis. It is interesting to note that no spores were detected in muscle tissue samples from either SE or SL. A similar observation was reported previously in Malaysia [57]. This author investigated the dissemination pattern of microsporidian infection in the body of wild armyworm larvae. Histopathological observations revealed severe infection of the fat body and gonads, mild infection of the Malpighian tubules, gut and neural epithelia, and no muscular infection. The author suggested this pattern of tissular infection to be that of Nosema sp. We also found no octosporoblastic development forming eight uninucleate spores during the infection process by the PX microsporidium. This single life cycle is likely to provide further evidence that the PX microsporidium belongs to the genus Nosema and is closely related to Nosema plutellae [46]. The spodopteran pests were susceptible to a microsporidium isolate from wild diamondback moths. Spores were produced by both SL and SE, with greater levels of production in the second species. Larval mortality occurred in both the leaf and beet armyworms. No adult emergence took place among the microsporidium-infected spodopteran larvae, while most control larvae emerged as adults. Although there have been few studies to examine the susceptibility of lepidopteran pests to non-symbiotic microsporidia, Solter and his colleague [58] tested the pathogenicity of microsporidia from native insects, including moths, on non-natural lepidopteran hosts. They found no transmission in their study. Rather than evaluating only host specificity in non-natural microsporidium-moth systems, we examined the susceptibility of a well-established invasive moth to a microsporidium isolate from a sympatric lepidopteran. A distinct pattern of infection of m-PX in these two pest species was observed. Until day 5, i.e., 120 hpi, all infected groups of SE larvae showed an upward trend or increasing degree of infection, especially after 96 hpi, with the presence of many spores in tissues. Spore concentration reached the maximum after 120 hpi. The highest dose of infection (1 × 10⁵ spores/ml) resulted in the highest spore concentration, and caused one death after 96 hpi. All infected instars in all groups succumbed to this infection after two weeks post-infection, with the majority reaching the pupal stage only. On the other hand, control larvae showed natural infection with the parasite but to a low degree, and they survived the infection, successfully undergoing metamorphosis to the adult stage. These results indicated that SE was susceptible to m-PX infection and that a low degree of infection in nature was a common phenomenon that normally did not cause death of the insect pests [59]. It is possible that death of the infected instars was due to the large number of spores, which caused severe tissue damage [60]. In infected larvae of SL, the patterns of m-PX infection were different. Spore concentrations in all groups of infected larvae increased markedly until 72 or 96 hpi, after which they declined sharply to a minimal level of less than 2 × 10³ spores/ml. Control larvae also showed the same pattern of infection, but with lower spore burdens. 
Interestingly, during this episode of infection, mortality did occur in the infected larvae as early as 48 hpi. Although larvae succumbed after 120 hpi and before the 15th day post-infection, most of the infected larvae died at the pupal stage, while control larvae successfully reached the adult stage. These results suggested that both pest species are naturally susceptible to m-PX but with different patterns of infection. During the course of infection, SL larvae mounted vigorous immune responses that were likely capable of clearing infection in some parts of their body, as suggested by the decrease in number of spores. Despite the ability to mount an immune response against the parasite, most infected larvae died. The strategy used by microsporidia to invade the host is dependent on the ability of their polar tube to rapidly extrude to allow injection of the infectious spore contents into the target cell [61,62]. Many factors come into play when considering interactions between parasite and host cell components. In particular, bacteria and their metabolites have marked influences on microsporidian infection, as indicated by a recent study in which Porrini and co-workers [63] investigated the antiparasitic activity of bacterial metabolites from Bacillus and Enterococcus strains on Nosema ceranae-infected bees. Their results indicated that spores exposed to direct contact with a particular surfactin showed significantly reduced infectivity. Although we did not determine whether the experimental hosts were infected with bacteria, is it important to note that the PX isolate was highly infectious for both SL and SE larvae, with infection peaks occurring between 36 and 120 hpi and first mortalities on day 2 or 4 post-infection. The premise of biological agent use against the larval stage is to reduce the target insect pest population densities by preventing a number of larvae from completing development. Krieg and his colleague [64] defined a good microbiological agent as one that is highly infectious. Further, Mewis and co-workers [33] examined the pathogenicity and transmission of a microsporidium against the lepidopteran pest Hellula undalis. They found that infection by feeding artificial diet containing spores to 3rd instar larvae resulted in 100% infection with a final mortality rate of 80%. They also noticed that most larvae died before reaching the pupal stage and 40% died 3 days after spore uptake. These observations prompted them to consider the microsporidium, Vairimorpha sp., as a potential microbiological agent. Based on the prerequisite of high infectivity for a good microsporidium agent discussed by others [64] and the results of the above-mentioned study, the described Nosema sp. of PX has potential for use as a microbial insect pest control agent. The diamondback moth has been considered as the most destructive pest of crucifers in Asia [65], including Malaysia. In this country, this pest attacks cruciferous vegetables, which are also infested by SE [66] and SL [67]. Ranked as a key pest in neighbouring Thailand, the armyworm has been reported to infest legumes and brassicas in Malaysia [66]. Besides chewing leaves, SL also infest tubers and roots of crops [68], and can block pod maturation of groundnuts [69]. Efforts to control the diamondback moth have been aimed through a variety of methods, but mainly by pesticides [70,71]. 
Although an increased frequency of insecticide sprays has sometimes been successful, the hope that this pest can be effectively controlled by this strategy has not been realized because it has developed widespread resistance to almost all insecticide classes [72]. In Malaysia, the intensive use of insecticides has resulted in the reduction of PX population size in some areas, but concomitant with this decrease, there has been an increased prevalence of spodopteran pests, in particular SL [66]. A similar observation was reported earlier in Southeast Asia [73,74]. Originating from outside, the armyworm has become well established in most of Malaysia's vegetable production sites [16,17], where it acquired important crucifer pest status during the 1990s [66]. As both PX and SL infest cruciferous vegetables, it is likely that competitive interactions occur between their populations. In addressing the ecology of invasive insect pests, Juliano and his colleague [34] claimed that invasive species may spread into new areas by occupying previously unoccupied habitat and that invasion may result in declines or elimination of ecologically similar native species. They also argued that non-native species may not expand over a limited area because they are not effective competitors, with competition from residents apparently contracting their dissemination. In approaching this issue, Lim and co-workers [66] argued that invasive pests can only be capably controlled if their complement of effective natural enemies also exists. Our results clearly demonstrated that the PX isolate is highly infectious and insecticidal to the studied spodopteran species. The microsporidian infections and the noctuids' mortality patterns obtained strongly support the suggestion that the parasite can be useful in managing these pests. At a time when there is still no definitive and efficient chemical insecticide strategy to control them, exploring the microsporidium isolate from the diamondback moth may aid sustainable vegetable production and help combat biodiversity loss in Malaysia and other countries with similar issues. This study was carried out to assess the effects of a Nosema species isolated from wild diamondback moths on developing larvae of two spodopteran pests with respect to its potential use as a control agent. In addition to providing insight into the transmission and pathogenicity of microsporidia in non-natural lepidopteran hosts, this study suggested that the Nosema sp. of PX may be useful in reducing spodopteran populations. Our results clearly indicated the efficacy of the parasite in infecting and killing the larvae of SL and SE. Tissular infection was diverse and resulted in appreciable spore production and larval mortality. These observations suggested that the PX isolate may be useful in managing these noctuid pests. Despite the encouraging results, there are still many unknowns, such as the effects of the PX isolate on humans (many microsporidia have emerged as important opportunistic pathogens in humans [75]) and on nontarget species of insects. In addressing the issue of the host specificity of insect pathogens, Solter and his colleague [76] claimed that generalist pathogens, introduced for biological control of a pest species, could theoretically become epidemic in nontarget species. They also argued for the need to assess the potential host range and possible effects on other species of insects before a pathogen is released for biological control purposes. 
This information is also important in obtaining regulatory approval for the tested microsporidium as a biological control agent and in understanding its evolutionary adaptability to new hosts [37]. Therefore, further research is required to evaluate the possible use of this microsporidium in a control program against Spodoptera pests. In particular, studies to confirm its efficacy both in the laboratory and under field conditions, and to determine the influence of environmental factors on its pathogenicity as well as its host range, are essential.
LAND SUBSIDENCE SUSCEPTIBILITY MAPPING USING MACHINE LEARNING ALGORITHMS: Land subsidence (LS) is one of the most challenging natural disasters, with potential consequences such as damage to infrastructure and buildings, the creation of sinkholes, and soil destruction. To mitigate the damages caused by LS, it is necessary to determine the LS-prone areas. In this paper, LS susceptibility was assessed for the Kashan Plain in Iran using Random Forest (RF) and XGBoost machine learning algorithms. For the susceptibility analysis, twelve influential factors including elevation, slope, aspect, curvature, topographic wetness index (TWI), groundwater drawdown (GWD), normalized difference vegetation index (NDVI), distance to stream (DtS), distance to road (DtR), distance to fault (DtF), lithology, and land use were taken into account. 291 LS points were used in this study, which were divided into two parts of 70% and 30% for training and testing the models, respectively. The prediction power of the models and their produced LS susceptibility maps (LSSMs) were validated using the Root Mean Square Error (RMSE), R-Squared (R²), and Mean Absolute Error (MAE) values. The results showed that XGBoost had a higher R², equal to 0.9032, compared to that of the RF, which was equal to 0.8355. The XGBoost model had an RMSE equal to 0.3764 cm compared to that of the RF model, which was equal to 0.4906 cm. MAE for the XGBoost model was 0.1217 cm and for the RF model was 0.3050 cm. Therefore, the achieved results proved that XGBoost had better performance in this research for predicting LS values based on the measured ones.

INTRODUCTION
Land subsidence (LS) is one of the most challenging natural disasters, with potential consequences such as damage to infrastructure, the creation of sinkholes, soil destruction and so on (Raspini et al., 2016; Shi et al., 2020). LS is an apparent and slow deformation or collapse of the earth surface which is caused by a number of natural and human factors (Ng et al., 2015; Zhou et al., 2017). LS can be a natural result of a number of natural and man-made disasters such as earthquakes, dissolution of carbonate rocks, movement of faults, or an increase in the depth of groundwater (Arabameri et al., 2021a; Yang et al., 2012). LS has become a global threat, which has occurred in various countries such as China, Mexico, Italy, the United States of America, Spain, and Iran (Brown and Nicholls, 2015; Chaussard et al., 2014; Corbau et al., 2019; Galloway and Burbey, 2011; Tung and Hu, 2012). In recent decades, the rate of LS in Iran has increased widely (Motagh et al., 2008; Tarighat et al., 2021). One of the greatest causes of LS in Iran is the indiscriminate exploitation of groundwater for agricultural purposes (Foroughnia et al., 2019; Mohammady et al., 2019). Water is essential to sustain life on earth. However, its availability is not the same in space and time. Groundwater meets a major part of water demand, and the increasing dependence on this source has led to the depletion of groundwater in different parts of the world. For decades, groundwater has been widely exploited for domestic, agricultural, and industrial purposes (Foroughnia et al., 2019; Mohammady et al., 2019). This requires artificial recharge to balance groundwater depletion and control LS. Water resources are a critical requirement for a sustainable food supply. The increase in temperature and the change in precipitation patterns in space and time due to climate change lead to frequent and severe droughts and
floods, which reduce the ability to absorb and store water (Mirza, 2003). Population growth and the industrialization of societies have increased the demand for water resources, and this trend will continue, as increasing amounts of water are needed to sustain societies (Dalin et al., 2017; Wada et al., 2010). Faced with the growing demand for access to fresh water, groundwater is used as a vital resource to meet agricultural, industrial, and drinking water demands. The increase in water demand has led to a decrease in groundwater in many parts of the world, including Iran (Foroughnia et al., 2019; Mohammady et al., 2019). In many areas, this issue has led to LS, which causes permanent loss of underground water storage (Smith et al., 2017), damage to infrastructure and arsenic contamination (Erban et al., 2013; Smith et al., 2017). Advances in remote sensing, geospatial information system (GIS) spatial analyses and artificial intelligence (AI) have helped in the modeling of several natural hazards, such as LS, to determine the LS-prone areas. Machine Learning (ML) methods are particularly important in natural hazard modeling due to their capacity to handle complex real-world problems, as well as their high accuracy and efficiency (Arabameri et al., 2021a; Chen et al., 2019; Ebrahimy et al., 2020; Feng et al., 2020; Lee et al., 2012; Yang et al., 2012). Previous research has verified that the twelve factors with the most important influence on LS are elevation, slope, aspect, distance to road (DtR), distance to fault (DtF), distance to stream (DtS), groundwater drawdown (GWD), normalized difference vegetation index (NDVI), topographic wetness index (TWI), curvature, lithology and land use (Arabameri et al., 2021b; Ebrahimy et al., 2020; Ranjgar et al., 2021). ML is a branch of AI (Zhou, 2021). ML has been used in various fields such as landslide modeling (Arabameri et al., 2020), the effects of climate change on the environment (Seyed Mousavi et al., 2022) and so on. In previous studies of LS, ML has been widely used to determine the relationship between the influencing factors and the subsidence of the area (Ranjgar et al., 2021; Shi et al., 2020) and to estimate and predict LS (Mohammady et al., 2019; Ranjgar et al., 2021; Shi et al., 2020). In previous studies, the XGBoost model was once used to model the LS of the Beijing Plain, China (Shi et al., 2020); however, the influencing factors used in the present study, except groundwater level, had not been taken into consideration. In this research, we have considered these influential factors as well in the modeling phase. Ensemble learning algorithms such as XGBoost and Random Forest (RF) improve the performance of the model by reducing the overall error rate (Zhou, 2012). Compared with traditional ML models such as SVM and ANN, XGBoost and RF have faster calculation speed, and compared with deep learning algorithms, these models are good for tabular data with fewer features (Shi et al., 2020). In this study, we have considered the twelve influencing factors in LS, and a number of sample points of subsidence were collected using radar interferometry. The data were divided into two parts, training and test, to train the XGBoost and RF models and evaluate their results. The final evaluation of the models was undertaken using Root Mean Square Error (RMSE), R-Squared (R²), and Mean Absolute Error (MAE) values. Finally, the LS
susceptibility map of the study area was produced from the two models and a comparison was made between them. The remaining parts of the paper are organized as follows. Section 2 concentrates on the description of the study area. Section 3 presents the research methodology. Section 4 elaborates the research results. Finally, Section 5 concludes the paper and suggests some directions for future research.

STUDY AREA
The Kashan Plain is a part of Kashan city, which ends at Karkas mountain in the south and is located about 240 kilometers south of Tehran, between longitudes 51.05 and 51.54 degrees and latitudes 33.45 and 34.23 degrees (Figure 1). The Kashan Plain, with an area of 1570 square kilometers, includes the city of Kashan, its central part, the cities of Aran and Bidgol and the agricultural lands located in the plain. The Kashan Plain is one of the lowest-rainfall regions of Iran. The climate of the study area has two classes, arid and semi-arid. The temperature in this area ranges from 16 °C to 22 °C, and elevation is between 799 m and 1336 m above mean sea level.

METHODOLOGY
The research methodology consists of four steps, which are illustrated in Figure 2: • Selecting 291 LS points. • Data preprocessing and production of maps of the twelve influencing factors in LS. • Employing RF and XGBoost ML algorithms to map the LS susceptibility. • Validation of the performance of each model using RMSE, R² and MAE.

3.1 Selecting LS points: The spatial distribution of several LS regions is shown in Figure 1. The interferometric synthetic aperture radar (InSAR) map produced by the Geological Survey and Mineral Exploration of Iran (GSI), with centimeter accuracy, in the first half of 2016 was used to prepare the sample points of subsidence in the study area.

Influencing factor maps: TWI = ln(a / tan b) (2) (Beven and Kirkby, 1979), where a is the local upslope area draining through a certain point per unit contour length and b is the local slope in radians. The TWI map was produced using the DEM on the ArcGIS 10.3 platform; its values range from 2.33 to 18.07. The lithology map of the study area was produced by GSI (Figure 3i); there are ten different classes of Geo Unit in this area (Table 2). The curvature map is shown in Figure 3e; its values range from −12.16 to 10.24. Finally, the map layers were created in grids of 20 m × 20 m size in order to harmonize the data.

RF regression: RF was developed by Breiman (Breiman, 2001; Wang et al., 2020). RF regression is a supervised ML decision tree-based algorithm, where the decision trees are formed with random samples from the training data (Breiman, 2001). Grid Search was used to adjust the parameters: n_estimators is the number of trees in the RF, and max_features is the number of features to consider when looking for the best split (Huang et al., 2016).

XGBoost regression: XGBoost is one of the quickest implementations of gradient boosted trees (Lu and Ma, 2020). XGBoost is an iterative decision tree algorithm, which uses residuals to improve the model. First, XGBoost supports parallel computing; second, it also supports regularization, which prevents model overfitting (Huang et al., 2022). Although the model is highly accurate, it can be easily overfit. For this purpose, n_estimators must be controlled. The eta parameter is used to control the rate of iterations and prevent overfitting, and subsample controls the proportion of examples drawn. The xgb_model parameter is used for selecting a weak evaluator. The objective, max_depth, alpha and lambda parameters are used to select the loss function, specify the maximum depth of each tree, and control the L1 and L2 regularization terms, respectively. We used the Grid Search method to adjust the parameters (Table 3). Finally, to find the most important influential factors of LS susceptibility, the mean decrease in impurity (MDI) was calculated for the RF (Figure 4) and XGBoost (Figure 5) models. The results showed that DtF, elevation and GWD have the greatest impact on LS occurrence.

Validation: To assess the efficiency of the models, R², RMSE, and MAE were employed (Equations 3-5). R-squared gives an estimate of the relationship between movements of a dependent variable based on an independent variable's movements. It represents the possible bias in the data and predictions; it does not by itself mean that the selected model is good or bad. The closer the R-squared is to one, the better (Cameron and Windmeijer, 1997). Root Mean Square Error (RMSE) is the standard deviation of the residuals (prediction errors) and is one of the most commonly used measures for evaluating the quality of predictions. It tells how concentrated the data are around the line of best fit; naturally, lower values indicate a better fit for the model (Barnston, 1992). MAE is the mean of the absolute errors, i.e. the absolute value of the difference between the predicted and the measured values (Eq. 5) (Willmott and Matsuura, 2005).
MAE = (1/n) Σ_{i=1}^{n} |ŷ_i − y_i|, where ŷ_i is the vector of predicted values of the dependent variable with n data points, y_i is the vector of observed values of the variable being predicted, and ȳ is the mean of the observed dependent variable (used in the R² formula).

RESULTS AND DISCUSSION
By utilizing the maps of the twelve influential factors produced, the selected LS points and the methods mentioned in Section 3.3, the mapping and assessment of LS susceptibility for the Kashan Plain were undertaken. RMSE, R², and MAE were calculated for the RF and XGBoost models. Table 4 demonstrates the comparison of RMSE, R², and MAE for each of the models used. The results showed that XGBoost had a higher R² (0.9032) compared with that of the RF (0.8355). The XGBoost model had a lower RMSE (0.3764 cm) than that of the RF model (0.4906 cm). MAE for the XGBoost model was equal to 0.1217 cm and for the RF model was equal to 0.3050 cm. Figure 6 demonstrates the compatibility between the measured data and the predicted data. As a result, the XGBoost model has a higher prediction accuracy than the RF model.

Results of the LS susceptibility maps (LSSMs)
After applying the models and evaluating the accuracies, the maps of the twelve influencing factors were stacked on the QGIS 3.16 platform to be used as the model input. Then the LSSMs were produced. The values of the LSSMs prepared from the models were in the range of 1.441 cm to -7.497 cm.

CONCLUSION
Reports of the Iran National Cartographic Center (NCC) show that the annual average subsidence rate has increased in Iran (https://www.ncc.gov.ir/en/); therefore, LS is an important issue in Iran. LS is affected by a number of factors. We selected the most important influencing factors to predict the LS rate in the Kashan Plain and to investigate the relationship of the parameters in LS modeling. The conclusions are as follows: • The phenomenon of LS is one of the most threatening natural hazards of the earth, which causes great losses to the economy. For a proper assessment of this issue, it is necessary to develop a suitable model of LS that can be used in any area. In previous studies, the XGBoost model was once used to model the LS of the Beijing Plain, China (Shi et al., 2020); however, the influencing factors used in the present study, except groundwater level, had not been taken into consideration. In this research, we have considered these influential factors as well in the modeling. The achieved results show that the model is well established. • The results have indicated that the XGBoost model had a lower RMSE (0.3764 cm) than that of the RF model (0.4906 cm). MAE for the XGBoost model was equal to 0.1217 cm and for the RF model was equal to 0.3050 cm. XGBoost had a higher R², equal to 0.9032, compared to that of the RF, which was equal to 0.8355, indicating better compatibility between the predicted and measured LS.
• As can be seen from both of the models used in this study, the highest rate of LS is in the northwest and west of the Kashan Plain and the lowest rate of LS is observed in the south of the Kashan Plain. In addition, in places where the DtR, the DtF and the DtS are smaller, a higher LS rate has been observed. According to the GWD map, the maximum GWD can be observed in the southwestern and northwestern parts of the study area. Furthermore, in the southwest of the Kashan Plain, a high subsidence rate has occurred. • The major strength of this study was the quality of the ensemble ML algorithms and their optimum prediction results in LS mapping. There were some limitations in this research, such as the lack of hydrological modeling, which will be considered in our future research.

Figure 1. Map of the study area and location of the employed LS sample points.
Figure 6. Compatibility between the measured data and the predicted data: (a) XGBoost, (b) RF.
Table 2. Lithology of the study area.
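The grid-searched RF and XGBoost regressions and the RMSE, R² and MAE validation described in the methodology above can be illustrated with a short, self-contained sketch. This is not the authors' code: the file name, column names and hyperparameter grids are hypothetical placeholders, chosen only to mirror the parameters named in the text (n_estimators, max_features, eta/learning_rate, max_depth, subsample, and the L1/L2 terms alpha and lambda).

```python
# Hypothetical sketch of the RF / XGBoost workflow described in the paper.
# Column names, file path and hyperparameter grids are illustrative only;
# categorical factors (lithology, land use) are assumed to be numerically encoded.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score
from xgboost import XGBRegressor

FACTORS = ["elevation", "slope", "aspect", "curvature", "twi", "gwd",
           "ndvi", "dts", "dtr", "dtf", "lithology", "landuse"]

df = pd.read_csv("ls_points.csv")                      # 291 LS sample points (placeholder file)
X_tr, X_te, y_tr, y_te = train_test_split(df[FACTORS], df["subsidence_cm"],
                                          test_size=0.30, random_state=42)

rf_grid = GridSearchCV(RandomForestRegressor(random_state=42),
                       {"n_estimators": [100, 300, 500],
                        "max_features": ["sqrt", 0.5, 1.0]},
                       cv=5, scoring="neg_root_mean_squared_error")

xgb_grid = GridSearchCV(XGBRegressor(objective="reg:squarederror", random_state=42),
                        {"n_estimators": [100, 300, 500],
                         "learning_rate": [0.05, 0.1, 0.3],   # 'eta'
                         "max_depth": [3, 5, 7],
                         "subsample": [0.7, 1.0],
                         "reg_alpha": [0.0, 1.0],             # L1 regularization
                         "reg_lambda": [1.0, 5.0]},           # L2 regularization
                        cv=5, scoring="neg_root_mean_squared_error")

for name, search in [("RF", rf_grid), ("XGBoost", xgb_grid)]:
    search.fit(X_tr, y_tr)
    pred = search.predict(X_te)
    rmse = mean_squared_error(y_te, pred) ** 0.5
    print(name, search.best_params_,
          f"RMSE={rmse:.4f}", f"R2={r2_score(y_te, pred):.4f}",
          f"MAE={mean_absolute_error(y_te, pred):.4f}")
```

Feature importances analogous to the MDI rankings reported in Figures 4 and 5 could then be read from search.best_estimator_.feature_importances_ for either fitted model.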
Ammonia Storage by Reversible Host–Guest Site Exchange in a Robust Metal–Organic Framework
Abstract: MFM-300(Al) shows reversible uptake of NH3 (15.7 mmol g⁻¹ at 273 K and 1.0 bar) over 50 cycles with an exceptional packing density of 0.62 g cm⁻³ at 293 K. In situ neutron powder diffraction and synchrotron FTIR micro-spectroscopy on ND3@MFM-300(Al) confirm reversible H/D site exchange between the adsorbent and adsorbate, representing a new type of adsorption interaction.

Synchrotron Infrared Micro-spectroscopy
Infrared micro-spectroscopy experiments were carried out using the B22: Multimode Infra-Red Imaging and Microspectroscopy (MIRIAM) beam line at the Diamond Light Source, Rutherford Appleton Laboratories (UK). The instrument comprises a Bruker Hyperion 3000 microscope in transmission mode, with a 15x objective and a liquid-N2-cooled MCT detector, coupled to a Bruker Vertex 80 V Fourier Transform IR interferometer using radiation generated from a bending magnet source. Spectra were collected (512 scans) in the range 500-4000 cm⁻¹ at 4 cm⁻¹ resolution, with an infrared spot size at the sample of approximately 20 × 20 µm. A microcrystalline powder of MFM-300(Al) was placed onto a ZnSe disk and placed within a Linkam FTIR 600 gas-tight sample cell equipped with ZnSe windows, a heating stage and gas inlets and outlets. The N2, NH3 and ND3 were pre-dried using individual zeolite filters. The analysis gases were dosed volumetrically to the sample cell using mass flow controllers, the total flow rate being maintained at 100 cm³ min⁻¹ for all experiments. The gases were directly vented to an exhaust system and the total pressure in the cell was therefore 1 bar for all experiments. The sample was desolvated under a flow of dry N2 at 100 cm³ min⁻¹ and 393 K for 3 h. The sample was then cooled to 293 K under a continuous flow of N2. Dry NH3 was then dosed as a function of partial pressure, maintaining a total flow of 100 cm³ min⁻¹. The sample was then regenerated with a flow of dry N2. To investigate the H→D exchange reaction, a flow of ND3 was introduced into the cell at a flow rate of 100 cm³ min⁻¹ at 293 K for 1 h. N2 flushing was then repeated at 100 cm³ min⁻¹, a scan was taken, and D→H exchange was implemented with a flow of NH3 at 100 cm³ min⁻¹. The spectrum was then recorded after a final N2 flushing at 100 cm³ min⁻¹.

A graph of ln(p) versus 1/T at constant loading allows the differential enthalpy and entropy of adsorption, and also the isosteric enthalpy of adsorption (Qst,n), to be determined. Four example fittings are displayed in Figure S1. The calculated R² value for each fitting is > 0.99, indicating a reliable fit.

Dual-site Langmuir-Freundlich fittings and IAST selectivity of NH3 vs CO2, CH4 and N2
Adsorption isotherms of NH3, CO2, N2 and CH4 in MFM-300(Al) at 293 K were fitted with the dual-site Langmuir-Freundlich model (Equation 2), n = qsat1·b1·P^v1/(1 + b1·P^v1) + qsat2·b2·P^v2/(1 + b2·P^v2), where n is the loading in mmol g⁻¹, P is the pressure in bar, qsat is the saturation capacity in mmol g⁻¹, b is the Langmuir parameter in bar⁻¹, and v is the Freundlich parameter, for the two sites 1 and 2. All R² values for the fits are > 0.999, confirming that they fit the model well. Ideal adsorbed solution theory (IAST) [4] was used to determine the selectivity factor, S, for binary mixtures. (In the associated table, * denotes data not available and a denotes values estimated from published N2 isotherms at 77 K.)

Ammonia cycling stability determined by in situ high-resolution PXRD
Five separate cycles of dosing and removal of NH3 in MFM-300(Al) were studied at beamline I11, DLS.
This was to determine the crystallographic stability of the material to repeated exposure to NH3 and whether it had any detrimental structural impact. Figure 1 shows the normalised diffraction patterns for this experiment, and we observe no significant structural changes in MFM-300(Al) over 5 repeated cycles of NH3. To further verify this, a full width at half maximum (FWHM) analysis of the (110), (211) and (112) peaks was undertaken. These peaks were selected for their relative intensity within the diffraction pattern and the different direction in which each plane is oriented. No significant peak broadening is observed over 5 cycles, indicating that NH3 is not causing structural changes (Figure S4). We therefore propose that the first site fills almost exclusively, as indicated by the cell contraction. After this point, the pores fill to saturation, causing an expansion in cell volume. Figure S11: View of the active binding site determined via NPD data of MFM-300(Al)·1.5ND3 (4).
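As an illustration of the dual-site Langmuir-Freundlich fitting and the R² check described above, the following is a minimal SciPy sketch. The isotherm points, initial guesses and bounds are hypothetical placeholders and would need to be replaced with the measured 293 K isotherm data; only the functional form follows Equation 2.

```python
# Hypothetical sketch: dual-site Langmuir-Freundlich fit of an adsorption isotherm.
# p in bar, n in mmol g-1; the example data points are placeholders, not measured values.
import numpy as np
from scipy.optimize import curve_fit

def dslf(p, q1, b1, v1, q2, b2, v2):
    """Dual-site Langmuir-Freundlich: sum of two independent site contributions."""
    return (q1 * b1 * p**v1 / (1 + b1 * p**v1) +
            q2 * b2 * p**v2 / (1 + b2 * p**v2))

p = np.array([0.01, 0.05, 0.1, 0.2, 0.4, 0.6, 0.8, 1.0])      # bar (placeholder)
n = np.array([2.1, 5.0, 7.2, 9.8, 12.1, 13.4, 14.3, 15.0])    # mmol g-1 (placeholder)

p0 = [8.0, 10.0, 1.0, 8.0, 1.0, 1.0]   # initial guesses: qsat1, b1, v1, qsat2, b2, v2
popt, _ = curve_fit(dslf, p, n, p0=p0, bounds=(0, np.inf))

residuals = n - dslf(p, *popt)
r_squared = 1 - np.sum(residuals**2) / np.sum((n - n.mean())**2)
print("qsat1, b1, v1, qsat2, b2, v2 =", np.round(popt, 3))
print("R^2 =", round(r_squared, 5))
```

The fitted parameters for each gas would then feed an IAST calculation of the selectivity factor S for the binary mixtures mentioned above.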
Occult middle meningeal artery to middle cerebral artery anastomosis associated with prior trauma
Summary: The report describes a patient who presented with a traumatic right temporoparietal calvarial fracture and a chronic right subdural haematoma, who underwent right middle meningeal artery embolisation with n-BCA, during which direct filling of an anterior temporal branch of the middle cerebral artery was observed.

BACKGROUND
Chronic subdural haematoma (cSDH) is one of the most common diagnoses in neurosurgery, with increasing incidence in the ageing population. 1 Medical management of patients who do not require emergent surgical evacuation includes observation, statins, steroids and tranexamic acid; surgical approaches to cSDH include craniotomy, burr holes and subdural evacuating port systems. Despite how common cSDH is, there exist no consistent evidence-based guidelines or indications for which type of intervention patients should undergo. While surgical approaches are effective in relieving the mass effect of subdural blood, they do not address the underlying pathophysiological mechanisms (ie, bleeding from the outer membrane of dura mater) thought to underlie subdural haematoma formation, and recurrence is common. 2 Middle meningeal artery (MMA) embolisation has emerged as a promising treatment strategy for patients with cSDH, by addressing the inflow to the dural membranes via the meningeal arteries. Ban et al report a treatment failure rate of 2.2% in symptomatic patients with cSDH treated with MMA embolisation, compared with a 27.5% failure rate derived from a large historical control group (comprising 469 patients) who were managed with conventional medical and surgical interventions. 2 To date, only one randomised controlled trial evaluating MMA embolisation for cSDH compared with conservative management has been published. 3 In this study, Lam et al included 36 patients who required surgical evacuation of subdural haematoma, 19 of whom underwent subsequent MMA embolisation. While functional outcomes in the embolisation group were improved compared with the control group, the study failed to meet the primary outcome of improved symptomatic recurrence requiring re-do surgical evacuation, given the small sample size, with three patients in the control arm requiring repeat surgery and no patients who underwent MMA embolisation requiring repeat surgery. Notably, the choice of embolic material for MMA embolisation varies widely, and studies evaluating polyvinyl alcohol particles, ethylene-vinyl alcohol dissolved in dimethyl-sulfoxide (Onyx; Medtronic Neurovascular, Irvine, California, USA), ethylene vinyl alcohol copolymer with suspended micronised tantalum dissolved in dimethyl sulfoxide (Squid; Balt, Montmorency, France), PHIL liquid embolic (MicroVention, Aliso Viejo, California, USA), n-butyl cyanoacrylate (n-BCA) and endovascular coils with and without gelatin sponge have all been described in the literature. 4 Meta-analysis comparing the choice of embolic for MMA embolisation has insufficient available data to reach a conclusion regarding efficacy differences between the agents. 5 Regardless of embolic choice, identification of extracranial to intracranial anastomoses prior to MMA embolisation is paramount to performing a safe procedure.
6 Persistent connections between the extracranial and intracranial circulation are relatively common and are broadly categorised into orbital, petrocavernous and upper cervical connections. Identification of persistent connections allows for avoidance of embolic stroke and cranial nerve palsies. Additionally, it has been shown that head trauma can result in MMA pseudoaneurysms 7 and dural arteriovenous fistulas 8 9 which require special attention to avoid complications during MMA embolisation. However, to our knowledge, head injury resulting in a direct meningeal to pial anastomosis of the MMA to the middle cerebral artery (MCA) has never been described in the literature.

CASE PRESENTATION
This patient is in his 60s with no known medical history and presented to an outside hospital following an unwitnessed fall. On arrival, the patient was confused, though responsive and following commands, and without focal neurological deficit (Glasgow Coma Score of 14). Initial head CT was significant for acute bilateral, right greater than left, subdural haematomas, scattered subarachnoid haemorrhage and a right temporoparietal bone fracture along the groove for the posterior division of the MMA with adjacent haemorrhagic contusion. Subsequent head CT 24 hours after admission was stable and the patient was managed conservatively, with discharge to rehabilitation at our facility. Head CT obtained on post-fall day 19 while in rehabilitation showed expansion of the right-sided subdural haematoma with mixed density blood products (15 mm from 6 mm, figure 1). The repeat head CT was performed to assess haematoma resolution/reaccumulation and guide management, as is standard at our institution, and not secondary to any decline in neurological status. At the time of repeat head CT, the patient demonstrated decreased executive functioning and slowed processing speed, with an Orientation Log score of 24/30. Family members also noted improved cognition during his stay in rehabilitation compared with the day of presentation, but he was noted not to be back at his baseline at the time of repeat imaging.

TREATMENT
Given the expansion of the right cSDH and persistent symptoms, the decision was made to perform MMA embolisation. The right MMA was selectively catheterised distal to the petrosal branch and angiography revealed two pseudoaneurysms within the parietal branch (figure 2A,B). A selective angiogram performed from the parietal branch (figure 2C,D) did not show any intracranial or orbital anastomoses, so the decision was made to perform glue embolisation with 25% n-BCA, which is standard practice at our institution. 10 Initially, glue was seen filling the proximal aspect of the artery, with subsequent direct filling of an anterior superior temporal branch of the middle cerebral artery with glue coursing medially (figure 3, online supplemental video). Injection was immediately stopped and the catheter was withdrawn under aspiration, with a total of 0.4 cc of n-BCA instilled. Attention was then turned to the anterior division of the MMA, and 1.0 cc of glue was injected with filling of the right frontal and parietal meningeal vessels with good contralateral penetration. A comparison of pre and post right common carotid artery angiograms confirmed glue within the anterior temporal branch of the right MCA (figure 4).
OUTCOME AND FOLLOW-UP
Postprocedurally, the patient had an unchanged neurological status and returned to his inpatient rehabilitation centre. He underwent a non-contrast head CT following embolisation on postprocedure day 1, which confirmed glue embolic material within the right anterior superior temporal lobe (figure 5). While in the rehabilitation centre, the patient demonstrated ongoing improvement in cognitive function, with discharge 2 weeks postprocedure. Outpatient follow-up head CT performed 3 weeks post-embolisation showed decreased thickness of the subdural collection with decreased local mass effect and midline shift (figure 6). At the patient's follow-up clinic appointment 3 months following treatment, the patient was noted to have improved cognition, with a desire to return to work.

DISCUSSION
Here, we describe the first known case of a post-traumatic meningio-pial anastomosis which was inadvertently embolised during MMA embolisation. This connection was not visualised on pre-embolisation superselective digital subtraction angiography performed with high pressure using a 1 cc syringe and for a prolonged duration, as is typical for this procedure. In retrospect, however, inspection of the pre-embolisation angiograms demonstrates apparent wash-out of contrast within the mid and distal portions of the parietal branch of the MMA, which could be explained by inflow from the subsequently embolised anterior temporal MCA branch (best seen in figure 2D). Overall, this report describes a traumatic anastomosis, which is an entity to be aware of in the setting of MMA embolisation following trauma with a fracture overlying the course of the meningeal vessels and in the presence of pseudoaneurysms. Selective catheterisation of the internal carotid artery prior to embolisation may be warranted in such cases, in addition to consideration of coil embolisation or conservative management. [13][14][15]

Learning points
► We describe the first known case of a post-traumatic meningio-pial anastomosis which was inadvertently embolised during middle meningeal artery (MMA) embolisation.
► Traumatic middle meningeal artery to middle cerebral artery anastomosis is an entity to be aware of in the setting of MMA embolisation following trauma with a fracture overlying the course of the meningeal vessels and in the presence of pseudoaneurysms.
► Selective catheterisation of the internal carotid artery prior to embolisation may be warranted in such cases, in addition to consideration of coil embolisation or conservative management.

X Jennifer Morgan Watchmaker @jennwatch
Contributors All authors were involved in the clinical care of the patient, gave final approval of the article, are responsible for the drafting of the text, sourcing and editing of clinical images, investigation results, drawing original diagrams and algorithms, and critical revision for important intellectual content, and are accountable for the content of the article and ensure that all questions regarding the accuracy or integrity of the article are investigated and resolved.
Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Competing interests None declared.
Patient consent for publication Consent obtained directly from patient(s).
Provenance and peer review Not commissioned; externally peer reviewed.

Figure 1. 3D reconstruction of the temporoparietal bone fracture (A). Sagittal 3D reconstruction of the inner skull surface showing the fracture through the parietal branch of the MMA (B). Axial images of the chronic right-sided subdural haematoma (C). MMA, middle meningeal artery; 3D, three dimensions.
Figure 2. Microcatheter injection from the middle meningeal artery (A, B) and superselective injection from the parietal branch (C, D). Two pseudoaneurysms arise from the parietal branch (arrows).
Figure 3. 25% n-BCA glue injection into the parietal branch of the middle meningeal artery; anterior-posterior projection (left panel), lateral projection (right panel). Initially glue is seen filling the proximal aspect of the MMA and the pseudoaneurysms, with subsequent direct filling of an anterior temporal branch of the middle cerebral artery with glue coursing medially. Injection was immediately stopped with a total of 0.4 cc of glue instilled (online supplemental video). MMA, middle meningeal artery; n-BCA, 25% N-butyl cyanoacrylate.
Figure 4. Pre-embolisation and post-embolisation right common carotid artery digital subtracted angiograms confirming glue within the anterior superior temporal branch of the right middle cerebral artery (A, B). Post-embolisation unsubtracted common carotid artery angiogram with a glue cast within an anterior temporal middle cerebral artery branch vessel (C).
Figure 5. Post-embolisation non-contrast head CT confirming glue embolic material within the right anterior (A) and superior temporal lobe (B).
Figure 6. Immediate post-embolisation head CT (A, B) and follow-up head CT performed 3 weeks following embolisation (C, D). There is decreased thickness of the subdural collection with decreased local mass effect and midline shift.
Asset Tangibility and Financial Performance: A Time Series Evidence
Since the 1980s, firms' asset investments have more or less tended to shift from tangible to intangible investments. This tendency is also valid for Turkish manufacturing firms. Despite the increasing importance of intangible asset investments, their role in financial performance is subject to considerable debate. These investments are riskier compared to tangible asset investments and cannot easily be used as collateral in corporate borrowing. Tangible assets offering high guarantee are pledged as the primary source of collateral in corporate borrowing. Consequently, a firm with higher asset tangibility is likely to have lower external financing costs, leading to higher financial performance. This paper analyzes the effect of asset tangibility on the financial performance of the Turkish manufacturing sector covering 1990.Q3-2016.Q4. The stationarity of the series and the cointegration relationship among them are tested by the ADF (1979; 1981), KPSS (1992), and Zivot and Andrews (1992) unit root tests, and the one-break Gregory and Hansen (1996) cointegration test. Long-run coefficients estimated by Stock and Watson (1993)'s DOLS methodology indicate that asset tangibility, financial leverage, liquidity and operating efficiency have significant and positive effects on financial performance until (and including) the break date. However, from this break date on, they affect financial performance negatively.

Introduction
The Conceptual Framework for Financial Reporting (2018) defines assets as "resources controlled by the entity (i.e. firm, as a type of entity) as a result of past events and from which future economic benefits are expected to flow to the entity". For accounting purposes, they are categorized as short-term versus long-term assets, and tangible versus intangible assets. Musah, Kong and Osei (2019) describe tangible assets as physical items of value that are not subject to sale to its customers and are used to generate income. Tangible assets can be classified into two types: (i) current assets, such as cash and cash equivalents, marketable securities, accounts receivable and inventory, with a lifespan of up to one year (short-term) that can easily be converted into cash without loss in value in times of emergency; and (ii) fixed assets, such as property, plant and equipment, for long-term (more than one accounting period) use with relatively low market liquidity (Birch, 2016; Downes and Goodman, 2003). Apart from tangible assets, a firm may have some other long-term operational assets lacking physical substance, in contrast to tangible assets. These assets, such as patents, copyrights, franchises, licenses, goodwill, software, and trademarks, are called intangible assets and provide firm-specific rights. As the market values of these intangible assets are very subjective and difficult to determine, they are usually not shown on the balance sheet until a situation emerges that requires the value to be determined objectively, such as the purchase or sale of an intangible asset.
Over about the last three decades, and especially since the onset of the global financial crisis of 2007-08, the (asset) investment patterns of firms all over the world have more or less tended to shift from tangible to intangible asset investments [Lei, Qiu and Wan (2018) for the United States; Thum-Thysen, Voigt, Bilbao-Osorio and Maier (2019) for European Union countries; and Corrado, Haskel, Jona-Lasinio and Iommi (2016) for Organisation for Economic Co-operation and Development (OECD) countries]. Data derived from the Central Bank of the Republic of Turkey Company Accounts 2005-2016 also confirm this tendency for Turkish manufacturing firms, though the tendency is as yet quite slight. Due to their increasing importance and recognition, intangible assets (i.e. intangible asset investments) have begun to be considered as a critical driver of labor productivity growth (Andrews and Criscuolo, 2013; Andrews and de Serres, 2012; Haskel and Westlake, 2017) and perceived as a source of future growth, and a new and potential mechanism to reverse the productivity slowdown observed in many economies after the global financial crisis (Andrews, Criscuolo and Pal, 2016). Despite the increasing importance and necessity of intangible assets in worldwide economies, the rationale behind examining the role of finance for intangible assets and the role of these assets in financial performance is subject to considerable debate. Intangible asset investments are riskier compared to tangible asset investments for several reasons. These assets are generally not traded on open markets due to problems such as imperfect property rights and information asymmetries. Besides, the classification and, more importantly, the valuation of intangible asset investments at firm level is relatively complicated and volatile (Demmou, Stefanescu and Arquie, 2019). In case of bankruptcy, the determination of the appraised liquidation value of intangible assets also carries uncertainties, due to the fact that they tend to be more firm-specific and not easily transferrable. One more distinguishing feature of tangible and intangible assets derives from the use of these assets as collateral in corporate borrowing (Bae and Goyal, 2009; Berger and Udell, 1990; Black, de Meza and Jeffreys, 1996; see Lei et al., 2018, for data on collateral used in selected countries). Given their characteristics, as compared to tangible assets, it is generally more difficult to finance intangible assets externally, because, due to the asymmetry of information in valuation, external investors cannot generally evaluate the feasibility, riskiness and return of innovative investment projects with the required certainty (Himmelberg and Petersen, 1994). Though intangible asset markets have become more liquid over the past three decades according to the Internal Revenue Service (IRS) Reports on Returns of Active Corporations, 1994-2005, and more sophisticated measurement methods for determining the value of intangible assets have been developed, the liquidation value of intangible assets is still significantly lower and unforecastable, causing contracting problems in case of default or bankruptcy (Hart and Moore, 1994). However, tangible assets offering high guarantee are pledged as the primary source of collateral in corporate borrowing and play an important role in a firm's access to external finance, due to their relatively low asymmetry of information in valuation and high recovery rates (Liberti and Sturgess, 2016; Shleifer and Vishny, 1992).
In the presence of frictions such as contract incompleteness and limited enforceability, creditors favor tangible assets (though these generally lose value when sold under financial distress; Acharya, Bharath and Srinivasan, 2007), because they can be more easily liquidated in case of bankruptcy (Holmstrom and Tirole, 1997). Besides, the degree of overall asset tangibility is accepted as an indicator of the upper bound on a firm's total debt capacity (Dietrich, 2007). As a consequence, a firm with relatively high asset tangibility generally tends to have lower external financing costs and precautionary savings (Lyandres and Palazzo, 2016), while one having relatively fewer tangible assets is more likely to face difficulties in raising external capital and to be financially constrained, missing investment opportunities (Almeida and Campello, 2007). This paper aims to extend the research on the nexus between asset tangibility and financial performance by providing advanced empirical evidence on time series data of the Turkish manufacturing sector [consisting of Borsa Istanbul (BIST) listed manufacturing firms] for the period 1990.Q3-2016.Q4. In the following sections, the existing literature is summarized, and the methodology and empirical results are presented. Finally, the paper is concluded by discussing the empirical findings, presenting the limitations of the study and making suggestions for further studies.

Literature Review
Broadly speaking, the asset tangibility and financial performance nexus can be discussed through the pecking order theory, first suggested by Donaldson (1961) and later modified by Myers (1984), and Myers and Majluf (1984), and the trade-off theory introduced by Kraus and Litzenberger (1973). The pecking order theory contradicts the existence of financial targets, stating that the firm follows a financing hierarchy. This hierarchy aims to minimize costs due to information asymmetry, and starts with internal resources and continues with external resources. Regarding external financing, debt issuance is favored over equity issuance due to the lower information cost of debt and the adverse selection effect of equity issuance. The pecking order theory considers the effect of tangible assets on capital structure through debt issuance, as these assets can be used as collateral for debt financing. The findings of Almeida and Campello (2007), Campello and Giambona (2013), and Koralun-Bereznicka (2013) confirm that asset redeployability, as a determinant of capital structure, positively affects access to relatively less costly debt financing without forcing the firm to issue equity, minimizing overall financing costs and leading to higher financial performance. The trade-off theory predicts the existence of an optimal capital structure of debt and equity (a target debt ratio), where debt tax shields are maximized and the bankruptcy costs associated with the debt are minimized. According to Myers (2001), this type of debt financing offers the firm a tax shield, as interest accrued on debt can be tax deductible, causing the actual cost of the borrowing to be less than the stated rate of interest. In their own words, Modigliani and Miller (1963) state this advantage of the tax shield as: "This means, among other things, that the tax advantages of debt financing are somewhat greater than we originally suggested".
In this sense, not exactly in the same way but similarly to the pecking order theory, the trade-off theory also allows the firm to increase the level of debt financing to gain the maximum advantage of the tax shield, considering the increasing riskiness of a possible bankruptcy. There exists a vast literature focusing on the effect of tangible assets on financial performance. In theory, it can reasonably be expected that a firm with a high level of highly liquid assets and tangible assets with high collateral value is likely to use trade credit (Lu-Andrews and Yu-Thompson, 2015). The liquidation advantage of these assets enables the firm to use trade credit at lower cost than bank loans. Thus, such a firm is likely to suffer less financial distress, compared to a firm with a relatively high level of intangible assets. Tangibility, here, serves as the catalyst leading to a reduction in financial distress and improving financial performance. Empirical studies on the subject offer mixed findings. The empirical findings of Mehari and Aemiro (2013) and Birhan (2017) on insurance companies in Ethiopia confirm a statistically significant and positive effect of asset tangibility on financial performance. Besides, the findings of Reyhani (2012) and Azadi (2013)'s studies on Tehran Stock Exchange listed manufacturing firms; Dong, Charles and Cai's (2012), Olatunji and Tajudeen's (2014) and Khan, Shamim and Goyal's (2018) papers on Chinese corporates, Nigerian commercial banks and National Stock Exchange of India Ltd. (NSE India) listed telecommunication companies, respectively; and Korkmaz and Karaca's (2014) and Kocaman, Altemur, Aldemir and Karaca's (2016) works on manufacturing firms in Turkey also confirm these empirical findings and fit the predictions of the theory on the tangible assets and financial performance relationship. However, some empirical findings contradictory to the theory, though fewer in number, also exist. According to the empirical findings of the studies of Eric, Samuel and Victor (2013) on insurance companies in Ghana, Pratheepan (2014) on Colombo Stock Exchange listed manufacturing companies in Sri Lanka, and Vintila and Nenu (2015) on Bucharest Stock Exchange listed firms in Romania, a statistically significant and negative relationship has been confirmed between asset tangibility and financial performance. Lastly, Kotsina and Hazak (2012), Okwo, Okelue and Nweze (2012) and Derbali (2014) have analyzed the asset tangibility and financial performance nexus and have not been able to observe any statistically significant relationship. These mixed empirical findings may be due to differences in the research samples and econometric methodologies used, and in the financial performance measures selected.

Data, Model and Methodology
This paper aims to analyze the effect of asset tangibility on financial performance by empirical tests including (i) the Augmented Dickey-Fuller (ADF, 1981), Kwiatkowski-Phillips-Schmidt-Shin (KPSS, 1992), and Zivot and Andrews (ZA, 1992) unit root tests; (ii) the one-break Gregory-Hansen (1996) cointegration test; and (iii) Stock and Watson (1993)'s dynamic ordinary least squares (DOLS) methodology. Before proceeding, the sampling, variable construction, descriptive statistics and research model are reported.

Sampling
The sample consists of the Turkish manufacturing sector, with 18 main sectors and 30 sub-sectors, from 1990.Q3 to 2016.Q4.
The reason for focusing on this time window is that the aim of the study is to reveal the effect of asset tangibility on financial performance while also simultaneously considering the effects of other firm characteristics, such as financial leverage, liquidity and operating efficiency (as control variables), and that the data available through the Central Bank of the Republic of Turkey (CBRT) web site are limited to this time window.

Variable Construction, Descriptive Statistics and Research Model
The left-hand-side variable of the research model is return on assets, a very common profitability measure used as a proxy for financial performance (Charitou, Elfani and Lous, 2010; Falope and Ajilore, 2009; Jose, Lancaster and Stevens, 1996; Saravanan, Sivasankaran, Srikanth and Shaw, 2017; Shin and Soenen, 1998; Şamiloğlu and Demirgüneş, 2008). Profitability is used as a common proxy for financial performance, as it evaluates the efficiency with which tangible assets and net current assets are transformed into profits. Though there are various other financial performance measures, such as return on equity, which reports the profits earned by the firm for its shareholders (Jose et al., 1996; Wang, 2002); return on invested capital, which reports the profits earned by the firm on long-term invested capital (Mohamad and Saad, 2010; Nobanee, Abdullatif and AlHajjar, 2011); gross operating income, as the ratio of gross operating profits to total assets (Banos-Caballero, Garcia-Teruel and Martinez-Solano, 2012; Deloof, 2003; Dong and Su, 2010); and net operating income, as the ratio of net operating profits to total assets (Deloof, 2003), return on assets has some distinctive advantages. It evaluates the efficiency of firm managers in using the firm's real investments and financial resources to generate income (Rivard and Thomas, 1997) and, unlike other measures, may be used as a proxy for financial performance in all industries, including manufacturing, financial and non-financial companies (Devi and Devi, 2014; Nunes, Serrasqueiro and Sequeira, 2009). The main focus of the research model is on asset tangibility. Asset tangibility is mostly defined by two measures in the finance literature: collateral value and tangible assets. Titman and Wessels (1988), and Chang et al. (2009), use the ratio of inventory plus gross plant and equipment to total assets as a proxy for collateral value in their studies on the determinants of capital structure. On the other hand, tangible assets, also known as fixed assets [or property, plant and equipment (PP&E)], is a term used in accounting for assets and property with finite monetary value and usually a physical form that cannot easily be converted into cash (Dyckman, 1992), and that are not directly sold to a firm's consumers and/or end-users. Therefore, the collateral value measure of Titman and Wessels (1988), and Chang, Lee and Lee (2009), may not be as appropriate as tangible assets to represent asset tangibility, as it also covers inventory, a current asset. According to Rajan and Zingales (1995), tangible asset intensive firms can reduce the agency costs of debt due to the ease of collateralization of these assets, and reduced agency costs of debt will result in higher financial performance. There are typically three measures of financial leverage in the finance literature: short-term debt to total assets, long-term debt to total assets and total debt to total assets.
Empirical findings on the effect of leverage on financial performance have yielded mixed results. According to the studies of Stulz (1990), Ang, Cole and Lin (2000), Abor (2007), Güngöraydınoğlu and Öztekin (2011) and Degryse, Goeji and Kappert (2012), there is a positive relationship between financial leverage and financial performance. This can be explained by the disciplining impact of higher financial leverage on firm managers' cash flow waste (Grossman and Hart, 1982), as higher pressure from leverage (debtholders) on the firm may result in increasing financial performance to generate more cash to pay for debt. In contrast, some other empirical studies (Dawar, 2014; De Jong, Kabir and Nguyen, 2008; Hall, Hutchinson and Michaelas, 2004; Mateev, Poutziouris and Ivanov, 2013; McConnell and Servaes, 1995) conclude that financial leverage is negatively related to financial performance. Higher leverage may increase interest payments, resulting in less cash availability. Besides, higher leverage may also result in a relative increase in interest rates and in a need for more collateral (tangible assets), and this may reduce financial performance by causing a decrease in cash flows. Liquidity management, as another possible determinant of financial performance, can be linked to working capital management, as most of the measures used to evaluate liquidity are derived from the components of working capital. Thus, the literature on the liquidity measures evaluating the effect of liquidity on financial performance is two-fold. While some studies focus on the cash conversion cycle, others focus on fundamental liquidity ratios such as the current, acid-test and cash ratios. The cash conversion cycle reflects only the operational side of the firm, concentrating on accounts receivable, accounts payable and inventories. However, liquidity ratios, as more comprehensive measures of corporate liquidity, capture the financial aspects of a firm, covering current assets and current liabilities (Mun and Jang, 2015). Regarding the empirical findings, Sur, Biswas and Ganguly (2001), Bardia (2004), Eljelly (2004), Narware (2004) and Saldanlı (2012) conclude that the current ratio, as a proxy for corporate liquidity, has a statistically significant and negative effect on financial performance. On the contrary, the findings of Ghosh and Maji (2003), Muhammad, Jan and Ullah (2012), Ehiedu (2014) and Rehman, Khan and Khokhar (2015) confirm the positive effect of the current ratio on financial performance. Lastly, the accounts receivable turnover ratio, as a proxy for operating efficiency, is included in the model. This ratio is an accounting measure used to quantify a company's effectiveness in collecting its receivables or money owed by clients. Relatively low/high accounts receivable turnover ratios are consequences of a firm's liberal/conservative credit policies. Though a liberal credit policy has the advantage of stimulating the firm's sales, due to the availability of a longer repayment period to customers for product assessment and ease of access to finance for product acquisition, a conservative credit policy has the side-effect of holding excessive working capital investment that eventually lowers return on assets and increases overall risk by destroying firm value (Deloof, 2003). The empirical literature reflects the implications of these policies.
The findings of Dong and Su (2010), Lyngstadaas and Berg (2016), and Shrivastava, Kumar and Kumar (2017) conclude that there is a statistically significant and negative relationship between the accounts receivable turnover ratio and financial performance, while the findings of Ramachandran and Janakiraman (2009), Sharma and Kumar (2011) and Abuzayed (2012) confirm the opposite. Definitions of the variables included in the research model, descriptive statistics and the correlation matrix for the data are given in Table 1. Return on assets (ROA), as a proxy for financial performance, is the ratio of net income to total assets. Tangible assets (TAN), as a proxy for asset tangibility, is the ratio of tangible assets to total assets. Total debt ratio (LEV), as a proxy for financial leverage, is the ratio of total debt (short- and long-term debt) to total assets. Current ratio (LIQ), as a proxy for corporate liquidity, is the ratio of current assets to current liabilities. Accounts receivable turnover ratio (OEF), as a proxy for operating efficiency, is the ratio of net credit sales to average accounts receivable. It is important to check the correlation between the independent variables to avoid multicollinearity, which refers to the extent to which independent variables are correlated. This should be done before developing the research model. As seen in Table 1, the analysis confirms the non-existence of multicollinearity among the variables, as all Pearson correlation coefficient values are less than the cut-off point of 0.60. The regression equation to test the relationships between the variables discussed above is ROA_t = β0 + β1 TAN_t + β2 LEV_t + β3 LIQ_t + β4 OEF_t + ε_t.

Unit Root Tests
In order to avoid the spurious regression problem, the order of integration of the variables should be investigated. This study uses the ADF (1981), KPSS (1992), and ZA (1992) unit root tests for detecting the presence of a unit root. The ADF test is derived from the Dickey-Fuller (hereafter, DF, 1979) test. As one of the most widely used unit root tests, the DF test is based on the model of a first-order autoregressive process (Box and Jenkins, 1970). The ADF test is based on the regression Δy_t = α + βt + γ y_{t-1} + Σ_{i=1}^{p} δ_i Δy_{t-i} + ε_t, and the test statistic is the t-ratio of the estimated coefficient γ. The problem of this test is the choice of the lag length p. Schwert (1989) suggests that the maximum lag is p_max = 12(T/100)^{1/4}, because if p is too low, the test will be affected by autocorrelation, and if p is too large, the power of the test will be lower (Arltova and Fedorova, 2016). The limiting distribution of the test statistic is identical to the distribution of the DF test statistic and, for T → ∞, is tabulated in Dickey (1976) and MacKinnon (1991). The unit root tests suggested by DF (1979) and ADF (1981), Phillips and Perron (1988), Elliot, Rothenberg and Stock (1996), Ng and Perron (1995), and Ng and Perron (2001) test the null hypothesis that the time series is integrated of order one, I(1). However, the KPSS (1992) test proposes that the time series is stationary around a deterministic trend and is calculated as the sum of a deterministic trend, a random walk and a stationary random error. The KPSS test is based on an LM test of the hypothesis that the random walk has a zero variance, i.e. H0: σu² = 0, meaning that the random-walk component is a constant, against the alternative H1: σu² > 0. The test statistic is LM = Σ_{t=1}^{T} S_t² / (T² σ̂ε²), where S_t = Σ_{i=1}^{t} ε̂_i, t = 1, 2, …, T, and σ̂ε² is the estimate of the variance of the error process. Critical values are derived by simulation and listed in KPSS (1992). Major events like economic crises, catastrophes, terrorism and pandemics may have influences on the data analyzed, as they tend to create structural break(s) in the series.
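Before turning to the structural-break extensions below, the basic unit root testing sequence can be illustrated with a minimal statsmodels sketch. The file and column names are hypothetical placeholders for the quarterly sector-level ratios, and the Zivot-Andrews routine shown is the statsmodels implementation rather than the authors' own code.

```python
# Hypothetical sketch: ADF, KPSS and Zivot-Andrews unit root tests on the model series.
# File name and column names are placeholders for the CBRT quarterly data.
import pandas as pd
from statsmodels.tsa.stattools import adfuller, kpss, zivot_andrews

df = pd.read_csv("manufacturing_ratios.csv", index_col="quarter")  # ROA, TAN, LEV, LIQ, OEF

for col in ["ROA", "TAN", "LEV", "LIQ", "OEF"]:
    level = df[col].dropna()
    diff = level.diff().dropna()

    adf_stat, adf_p, *_ = adfuller(level, regression="ct", autolag="AIC")
    kpss_stat, kpss_p, *_ = kpss(level, regression="ct", nlags="auto")
    za_stat, za_p, za_crit, _, za_bp = zivot_andrews(level, regression="ct")

    adf_d, adf_dp, *_ = adfuller(diff, regression="c", autolag="AIC")

    print(f"{col}: ADF p={adf_p:.3f}, KPSS p={kpss_p:.3f}, "
          f"ZA p={za_p:.3f} (break at obs {za_bp}); ADF on first diff p={adf_dp:.3f}")
```

A series would be treated as I(1) when the level fails the ADF/ZA tests (and rejects KPSS stationarity) while its first difference does not, which is the pattern reported in Table 2.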
Traditional unit root tests such as ADF and KPSS may be biased towards finding non-stationarity in time series analysis when such structural breaks are present. Therefore, in such cases advanced unit root tests allowing for the presence of structural break(s) are required. These tests prevent the results from being biased towards non-stationarity and a unit root, and they can identify the possible break date (Glynn, Perara and Verma, 2007). Therefore, the Zivot and Andrews (1992) unit root test with an endogenous structural break is also performed to avoid obtaining biased results. The ZA test, also known as a sequential trend break model, is a variation of Perron (1989)'s original test under the assumption that the exact timing of the structural break point is not known. Instead, a data-dependent algorithm replaces Perron (1989)'s subjective procedure for determining the break point. The main difference between the two approaches is that Perron (1989)'s break is predetermined, while the ZA break is estimated. Zivot and Andrews (1992) proceed with three models to test for a unit root: (i) Model A, permitting a one-time change in the level of the series; (ii) Model B, allowing for a one-time change in the slope of the trend function; and (iii) Model C, combining one-time changes in the level and the slope of the trend function of the series. The regression equations corresponding to these models are:

Model A: Δy_t = μ + θ DU_t + β t + α y_{t-1} + Σ_{j=1}^{k} c_j Δy_{t-j} + e_t
Model B: Δy_t = μ + γ DT_t + β t + α y_{t-1} + Σ_{j=1}^{k} c_j Δy_{t-j} + e_t
Model C: Δy_t = μ + θ DU_t + γ DT_t + β t + α y_{t-1} + Σ_{j=1}^{k} c_j Δy_{t-j} + e_t

where DU_t is an indicator dummy variable for a mean shift occurring at each possible break date (TB), i.e. DU_t = 1 if t > TB and 0 otherwise, while DT_t is the corresponding trend-shift variable, i.e. DT_t = t − TB if t > TB and 0 otherwise. The null hypothesis for all models is α = 0, implying that the series y_t contains a unit root with a drift that excludes any structural break, while the alternative hypothesis α < 0 is that the series is a trend-stationary process with a one-time break occurring at an unknown point in time.

Cointegration Test

The study employs the one-break Gregory and Hansen (1996) cointegration test to detect a structural break in the cointegrating relationship among the variables. This cointegration test can be regarded as an extension of the Engle and Granger (1987) approach, and it involves testing the null hypothesis of no cointegration against an alternative of cointegration with a single regime shift (structural break) at an unknown date, based on extensions of the traditional ADF, Z_t and Z_α test types (Doguwa et al., 2014). Gregory and Hansen (1996) propose three models, (i) level shift (C), (ii) level shift with trend (C/T) and (iii) intercept with slope shifts (C/S), to test for cointegration with structural breaks, adapted here to the research model of the study:

ROA_t = μ_1 + μ_2 φ_tτ + α_1 TAN_t + α_2 LEV_t + α_3 LIQ_t + α_4 OEF_t + e_t   (7)

where φ_tτ is a dummy variable equal to 0 for t ≤ [Tτ] and 1 for t > [Tτ], and μ_1 and μ_2 represent the intercept before and after the shift, respectively;

ROA_t = μ_1 + μ_2 φ_tτ + βt + α_1 TAN_t + α_2 LEV_t + α_3 LIQ_t + α_4 OEF_t + e_t   (8)

where β is the coefficient of the trend term, t; and

ROA_t = μ_1 + μ_2 φ_tτ + α_1 TAN_t + α_11 TAN_t φ_tτ + α_2 LEV_t + α_22 LEV_t φ_tτ + α_3 LIQ_t + α_33 LIQ_t φ_tτ + α_4 OEF_t + α_44 OEF_t φ_tτ + e_t   (9)

where α_1, α_2, α_3 and α_4 denote the cointegrating slope coefficients before the regime shift and α_11, α_22, α_33 and α_44 denote the change in the slope coefficients. Gregory and Hansen (1996)'s test is designed for testing cointegration in situations with an unknown break date. Therefore, it requires computing the usual statistics (the ADF and Phillips test statistics) for all possible break points (τ) and then selecting the smallest values to determine the most appropriate break date (Narayan, 2007). Selecting small (strongly negative) values of the test statistics constitutes evidence against the null hypothesis of no cointegration.
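The Gregory-Hansen statistics are not available in standard Python libraries, so the sketch below only illustrates the underlying idea for the C/S specification under stated assumptions: loop over candidate break dates, add level- and slope-shift dummies to the cointegrating regression, compute an ADF statistic on the residuals, and keep the smallest value. The variable names are placeholders for the study's series, and the result would still have to be compared with the Gregory-Hansen critical values.

```python
# Minimal sketch of a Gregory-Hansen (1996) style break search for the C/S model.
# y is the dependent series (ROA); X holds TAN, LEV, LIQ and OEF (placeholders).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

def gregory_hansen_cs(y: pd.Series, X: pd.DataFrame, trim: float = 0.15):
    T = len(y)
    lo, hi = int(trim * T), int((1 - trim) * T)
    best = {"stat": np.inf, "break": None}
    for tb in range(lo, hi):
        phi = (np.arange(T) > tb).astype(float)          # regime-shift dummy
        design = pd.concat([X, X.mul(phi, axis=0).add_suffix("_shift")], axis=1)
        design["phi"] = phi                               # level-shift term
        resid = sm.OLS(y, sm.add_constant(design)).fit().resid
        # ADF on residuals without deterministic terms ("nc" on older statsmodels)
        adf_stat = adfuller(resid, regression="n", autolag="AIC")[0]
        if adf_stat < best["stat"]:
            best = {"stat": adf_stat, "break": y.index[tb]}
    return best  # compare best["stat"] with Gregory-Hansen (1996) critical values

# X = data[["TAN", "LEV", "LIQ", "OEF"]]; y = data["ROA"]
# print(gregory_hansen_cs(y, X))
```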
The formulations of the ADF (ADF*) and Phillips test statistics (Z_t* and Z_α*) are as follows (Gregory and Hansen, 1996):

ADF* = inf_{τ∈T} ADF(τ),  Z_t* = inf_{τ∈T} Z_t(τ),  Z_α* = inf_{τ∈T} Z_α(τ)

The critical values can be obtained from Table 1 in Gregory and Hansen (1996). If the calculated test statistics exceed the critical values in absolute value, a cointegration relationship exists among the series, rejecting the null hypothesis of no cointegration.

Estimation of Long-Run Coefficients

After the cointegration relationship has been established, the next step is to estimate the long-run cointegration coefficients that describe the relationships among the series. This study employs the DOLS methodology of Stock and Watson (1993). This methodology improves on ordinary least squares (OLS) and has certain advantages over both OLS and maximum likelihood procedures, as it copes with small-sample and dynamic sources of bias. As a robust single-equation approach, DOLS corrects for endogeneity by including leads and lags of the first differences of the regressors, and for serially correlated errors by a generalized least squares procedure (Esteve and Requena, 2006). The DOLS estimator is obtained from

ROA_t = β_0 + β_1 TAN_t + β_2 LEV_t + β_3 LIQ_t + β_4 OEF_t + Σ_{j=-q}^{p} δ_j' ΔX_{t+j} + u_t

where X_t = (TAN_t, LEV_t, LIQ_t, OEF_t)', p and q represent the optimum numbers of leads and lags, and u_t is the error term.

Results of Unit Root Tests

The results of the ADF, KPSS and ZA tests (with break dates given in parentheses) are presented in Table 2. The results indicate that the series are stationary at their first differences and integrated of order one, I(1).

Results of the GH Cointegration Test

After all variables are found to be I(1) by the unit root tests, the Gregory and Hansen (1996) cointegration test is performed to examine the cointegration relationship. The test results, indicating the existence of a cointegration relationship among the variables, are given in Table 3.

Table 3. Cointegration test results of Gregory and Hansen (1996)
Model    Break Date    ADF*        Z_t*         Z_α*
C/S      2002.Q1       -8.89***    -10.46***    -100.17***
Note: *** implies significance at the 1% level. For the C/S model, the critical values for ADF* and Z_t* are -6.92, -6.41 and -6.17 at the 1%, 5% and 10% significance levels, while the critical values for Z_α* are -90.35, -78.52 and -75.56 at the 1%, 5% and 10% significance levels, respectively [obtained from Gregory and Hansen (1996)].

Results of DOLS Long-Run Estimations

Following the testing of cointegration between the variables, the long-run coefficients are estimated. The long-run coefficients estimated by the DOLS methodology are given in Table 4.

Table 4. Long-run coefficients estimated by the Stock and Watson (1993) DOLS methodology
Note: ***, ** and * imply significance at the 1%, 5% and 10% levels, respectively.

The empirical findings indicate that, according to the estimated coefficients, all the independent variables, tangibility (TAN), leverage (LEV), liquidity (LIQ) and operating efficiency (OEF) (as proxies for asset tangibility, financial leverage, corporate liquidity and operating efficiency, respectively), have statistically significant effects on financial performance (as proxied by return on assets) during the sample period. While the effects of asset tangibility, financial leverage, corporate liquidity and operating efficiency on financial performance were positive till (and including) the break date of 2002.Q1, these effects all turned negative following 2002.Q1. This break date can be associated with the severe economic difficulties the Turkish economy has been experiencing since the 1990s, and especially with the 2001 crisis (discussed further in the Conclusion).
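As an illustration of the estimation step described above, the following sketch sets up a DOLS regression by hand, since statsmodels has no built-in DOLS routine; the lead/lag orders, the HAC bandwidth and the column names are assumptions made for illustration only.

```python
# Sketch of a DOLS (Stock-Watson, 1993) estimation: regress ROA on the levels of
# the regressors plus leads and lags of their first differences, with HAC
# (Newey-West) standard errors. Column names are placeholders.
import pandas as pd
import statsmodels.api as sm

def dols(y: pd.Series, X: pd.DataFrame, leads: int = 2, lags: int = 2):
    dX = X.diff()
    extra = {}
    for j in range(-leads, lags + 1):
        shifted = dX.shift(j)                      # j < 0 are leads, j > 0 lags
        extra.update({f"d{c}_{j:+d}": shifted[c] for c in X.columns})
    design = pd.concat([X, pd.DataFrame(extra)], axis=1).dropna()
    y_al = y.loc[design.index]
    model = sm.OLS(y_al, sm.add_constant(design))
    return model.fit(cov_type="HAC", cov_kwds={"maxlags": 4})

# res = dols(data["ROA"], data[["TAN", "LEV", "LIQ", "OEF"]])
# print(res.params[["TAN", "LEV", "LIQ", "OEF"]])  # long-run coefficients
```

A regime-shift version of the same regression (with the level- and slope-shift dummies of the C/S model) would yield the pre- and post-2002.Q1 coefficients reported in Table 4.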
Conclusion

This study aims to analyze the effect of asset tangibility on financial performance while simultaneously considering the effects of other firm characteristics, namely financial leverage, liquidity and operating efficiency (as control variables), in the Turkish manufacturing sector, covering 18 main sectors and 30 sub-sectors over 1990.Q3 to 2016.Q4. The stationarity of the series is tested by the Augmented Dickey-Fuller (1981), Kwiatkowski-Phillips-Schmidt-Shin (1992), and Zivot and Andrews (1992) unit root tests. The unit root test results indicate that the series are stationary at their first differences. The existence of a cointegration relationship among the series, and of any structural break, is examined with the one-break Gregory and Hansen (1996) cointegration test, which finds a cointegration relationship among the series with a break date of 2002.Q1. Finally, to estimate the long-run cointegration coefficients that describe the relationships among the series, the dynamic ordinary least squares methodology of Stock and Watson (1993) is employed. The findings indicate that asset tangibility has statistically significant effects on financial performance during the entire sample period. Likewise, financial leverage, corporate liquidity and operating efficiency, as control variables, show the same kind of effect on financial performance. While the effects of all independent variables on financial performance were positive till (and including) the break date of 2002.Q1, these effects all turned negative following the break date. The structural break date estimated by the one-break Gregory and Hansen (1996) cointegration test can be associated with the severe economic difficulties the Turkish economy has been experiencing since the 1990s. Following the 1990s, the Turkish economy witnessed dramatic turning points. A currency crisis (crash) emerged in 1994 as a consequence of huge public sector borrowing requirements and major policy fallacies in financing the deficits. Five years later, in August and November 1999, two destructive earthquakes struck the most industrialized part of Turkey, Kocaeli, causing significant negative effects on the economy and resulting in a 3.4% contraction of gross domestic product (GDP) in 1999. While the adverse effects of the earthquakes were still ongoing, Turkey encountered another crisis on February 19th, 2001, in the form of a virtual raid on foreign currencies. The most devastating effect of the 2001 crisis was felt in the domestic banking sector. Along with the collapse of this sector, the increase in interest rates and the devaluation of the Turkish Lira hit the real sector. The Turkish economy contracted by 5.7% in 2001 and GDP dropped to its 1995 level. The contraction was even bigger in the manufacturing sector, reaching 9.4%. In only the first three months of the crisis, 4,146 firms were closed. During the 2001 crisis, the unemployment rate due to dismissals increased to 16%, compared to 9% in 2000, and investments came to a standstill (Atabek Demirhan and Ercan, 2018). As previously stated, major events like economic crises, catastrophes, terrorism and pandemics may influence the data analyzed and tend to create structural breaks; the lingering adverse effects of the 1994, 1999 and 2001 crises can therefore be seen as possible causes of the estimated break date of 2002.Q1. The effect of asset tangibility, as proxied by tangible assets (i.e.
tangible asset investments), on financial performance was positive till (and including) the break date. This empirical finding can be associated with Rajan and Zingales (1995)'s evidence that tangible-asset-intensive firms (such as manufacturing firms) can reduce the agency costs of debt thanks to the ease of collateralizing these assets, and reduced agency costs of debt result in higher financial performance. However, this positive contribution of tangible assets to financial performance reversed following the break date. One likely cause is that, during the 2001 crisis, tangible investments in the manufacturing sector in particular came to a standstill. Besides, higher real interest rates (following the abrupt rise in the overnight interest rate to 2,000% on 20 February and 4,000% on 21 February 2001) not only led to a significant weakening of domestic demand and growth, but also sharply increased the funding costs of investments. Similarly, financial leverage, corporate liquidity and operating efficiency, proxied by the total debt ratio, the current ratio and the accounts receivable turnover ratio as control variables, respectively, show the same pattern of effects on financial performance. The finding for financial leverage till (and including) the break date of 2002.Q1 can be explained by the findings of Stulz (1990), Ang, Cole and Lin (2000), Abor (2007), Güngöraydınoğlu and Öztekin (2011), and Degryse, Goeij and Kappert (2012). The positive effect of financial leverage on financial performance may result from the disciplinarian impact of higher financial leverage on firm managers' cash flow waste, because higher pressure from leverage (or debtholders) on the firm may push it to increase financial performance in order to generate more cash to service the debt. The reverse effect observed following the break date can be associated with the sharply increasing funding costs that followed the 2001 crisis. The effect of corporate liquidity on financial performance is two-fold. A firm with more liquid assets has the advantage of being able to convert these assets into cash quickly at any point in time to meet its liabilities, and it tends to be relatively profitable compared with companies holding lower levels of liquid assets. For the sample period till (and including) the break date, this point of view fits the empirical finding of the study that corporate liquidity has a positive effect on financial performance. The findings of Ghosh and Maji (2003), Muhammad et al. (2012), Ehiedu (2014) and Rehman et al. (2015) confirm this finding. However, high liquidity also means that the firm has idle funds tied up in current assets, reducing the chance of investing in other, potentially more profitable projects. Besides, the returns on liquid assets must be reinvested over relatively short time periods, so their reinvestment risks are relatively high. The empirical finding of the study following the break date supports this opposite view, in line with the findings of Sur et al. (2001), Bardia (2004), Eljelly (2004), Narware (2004) and Saldanlı (2012), which point to a negative effect of corporate liquidity on financial performance. The positive effect of the accounts receivable turnover ratio on financial performance till (and including) the break date may be a consequence of implementing a relatively liberal credit policy. Such a policy stimulates sales owing to the longer repayment period available to customers for product assessment and the easier access to finance for product acquisition.
However, following the break date, due to the adverse effects of the 2001 crisis, the manufacturing sector may have been forced to implement a conservative credit policy, with the side-effect of holding excessive working capital investment that eventually lowers financial performance. The empirical findings of Ramachandran and Janakiraman (2009), Sharma and Kumar (2011), and Abuzayed (2012) are similar to this study's finding for the sample period till (and including) the break date, while the findings of Dong and Su (2010), Lyngstadaas and Berg (2016), and Shrivastava et al. (2017) fit this study's finding following the break date of 2002.Q1. This study is subject to some limitations. The findings cannot be generalized to other sectors, as the sample consists only of the manufacturing sector. Besides, the proxies for both the dependent and independent variables used in the research model can be replaced with alternative proxies mentioned in the literature. Further studies can therefore be conducted considering alternative variables and larger samples covering different sectors.
Triple-Band Surface Plasmon Resonance Metamaterial Absorber Based on Open-Ended Prohibited Sign Type Monolayer Graphene This paper introduces a novel metamaterial absorber based on surface plasmon resonance (SPR). The absorber is capable of triple-mode perfect absorption, polarization independence, incident angle insensitivity, tunability, high sensitivity, and a high figure of merit (FOM). The structure of the absorber consists of a sandwiched stack: a top layer of single-layer graphene array with an open-ended prohibited sign type (OPST) pattern, a middle layer of thicker SiO2, and a bottom layer of the gold metal mirror (Au). The simulation of COMSOL software suggests it achieves perfect absorption at frequencies of fI = 4.04 THz, fII = 6.76 THz, and fIII = 9.40 THz, with absorption peaks of 99.404%, 99.353%, and 99.146%, respectively. These three resonant frequencies and corresponding absorption rates can be regulated by controlling the patterned graphene’s geometric parameters or just adjusting the Fermi level (EF). Additionally, when the incident angle changes between 0~50°, the absorption peaks still reach 99% regardless of the kind of polarization. Finally, to test its refractive index sensing performance, this paper calculates the results of the structure under different environments which demonstrate maximum sensitivities in three modes: SI = 0.875 THz/RIU, SII = 1.250 THz/RIU, and SIII = 2.000 THz/RIU. The FOM can reach FOMI = 3.74 RIU−1, FOMII = 6.08 RIU−1, and FOMIII = 9.58 RIU−1. In conclusion, we provide a new approach for designing a tunable multi-band SPR metamaterial absorber with potential applications in photodetectors, active optoelectronic devices, and chemical sensors. Introduction In recent years, a new development has emerged in the field of artificial absorbers-the preparation of perfect metamaterial absorbers (PMAs). This process utilizes surface plasmon resonance (SPR), which involves the absorption of incident electromagnetic waves by electrons on the surface and interface of the metamaterial, thereby reducing the reflection [1,2]. The incident wave couples with the electromagnetic component, surpassing the conventional optical diffraction limit and enhancing the local electromagnetic field, ultimately achieving perfect absorption. PMAs have a wide range of applications in optical sensing, optical stealth, light detection, photothermal conversion, photocatalysis, and many other fields [3][4][5][6][7][8][9][10][11][12]. However, the traditional SPR structure is usually fixed and lacks the ability to dynamically adjust the resonance frequency and absorption rate. In addition, the sensitivity also has a connection with the polarization and incident angle. These factors limit the flexibility of a PMA in practical applications [12][13][14][15][16][17]. Therefore, an appropriate metamaterial must be employed to realize a perfect SPR metamaterial absorber that is tunable and incident angle insensitive. In order to achieve tunability in devices, many researchers have employed graphene [18], vanadium dioxide (VO 2 ) [19], and Dirac semimetals [20]. Among these materials, graphene stands out due to its extremely high conductivity and carrier mobility, which provides a full-spectrum response to terahertz waves and strong surface plasmon resonance [21][22][23]. Moreover, the Fermi level of graphene can be precisely controlled through external voltage [24], while materials such as VO 2 are difficult and expensive to control due to their abrupt phase transitions [25]. 
Furthermore, graphene exhibits a fast response throughout the entire terahertz range [26], enabling graphene materials to achieve multiple perfect absorption peaks in the terahertz range with improved spacing between resonant frequencies and modulation bandwidth. On the other hand, Dirac semimetal materials are a new type of 3D material [27] that has been studied less compared to graphene. The experimental equipment required for their preparation is limited and demanding, making actual production difficult, and their response to terahertz frequencies is low, making it challenging to achieve absorption rates above 99% and narrow spectral bandwidths between multiple peaks. Based on these characteristics, graphene-based absorbers can achieve perfect absorption at specific resonant frequencies. Therefore, the design of graphene-based absorbers is a popular topic. Among many surface plasmon metamaterial absorbers, narrowband perfect absorbers and their refractive index sensing properties have been a popular topic of research. For instance, in 2017, Chen et al. proposed a single peak absorber with an absorption of 99.51% at 2.71 THz. Later, dual-frequency and wide-band absorption were achieved by simply stacking two layers of different geometric sizes of graphene metasurfaces, resulting in absorption rates of 98.94% at 1.99 THz and 99.1% at THz [28]. Two years later, Yan et al. had a discussion on a tunable single-mode absorber about the influence of changes in the Fermi level concerning resonant frequencies. The maximum absorption rate can reach 99.99% at E F = 0 eV [29]. More recently, in 2022, Zhu et al. designed an absorber that had a single-band and dual-band. The single-band absorption is 99.30% at 16.0 THz, while the dual-band absorption could reach 94.56% at 11.4 THz and 99.11% at 26.2 THz, respectively [30]. However, most studies have focused on only one or two narrowband metamaterial absorbers, and there is little discussion about the ideal narrowband perfect absorbers for multi-band applications. This is due to strict limitations on the design and the difficulty of implementation with simple structures. This paper proposes a novel three-mode metamaterial SPR absorber based on an open-ended prohibited sign type (OPST) patterned graphene. Compared with other structures, the proposed structures possess the novelty of simple graphene patterns, multi-band absorption, stable dielectric material properties, and flexible geometric parameters. The structure comprises a periodic graphene array with an OPST pattern on the top, followed by a thick silica (SiO 2 ) spacer layer and a metallic gold (Au) mirror at the bottom. Using COMSOL Multiphysics simulation software, we were able to calculate and observe perfect absorption rates of 99.404%, 99.353%, and 99.146% at 4.04 THz, 6.76 THz, and 9.40 THz, respectively, independent of polarization. Then, the principle of perfect absorption of the absorber was carefully studied by analyzing the local electric field intensity distribution and mapping the equivalent impedance matching diagrams. Additionally, we demonstrated tunability by changing E F while keeping the structural parameters fixed. To test the sensor performance of the device, we changed the surrounding refractive index to achieve a maximum FOM and sensitivity of 9.58 RIU −1 and 2.000 THz/RIU, respectively. 
Compared to other graphene SPR metamaterial absorbers, this one offers more flexible geometric parameters and more sensitive sensing properties, thus promoting greater diversity in the design of graphene-based metamaterial absorbers and providing new design inspiration. Therefore, we believe that this new type of SPR metamaterial absorber has potential applications in the field of active optoelectronic devices, modulators, refractive index sensors, and detectors.

Structure and Design

As shown in Figure 1a, the whole periodic structure of the metamaterial absorber consists of a high-sensitivity, multi-band, open-ended prohibited sign type (OPST) single-layer patterned graphene array with certain geometric parameters. It comprises a sandwich-stacked structure of a graphene layer deposited on SiO2/Au substrates. We utilized COMSOL Multiphysics software to simulate the physical process of absorption using the finite element method (FEM) [31,32]. The simulation involved a SiO2 dielectric layer with a refractive index of 1.97, an Au layer with a dielectric constant of 1.00, and a graphene layer considered a homogeneous medium with a thickness of tg = 1 nm [33]. A 3D diagram of the absorber unit cell is shown in Figure 1b, with a structure period of P = Px = Py = 4 µm. The thickness of the SiO2 and Au layers is td = 4.6 µm and tm = 1 µm, respectively. In addition, Figure 1c portrays a vertical view of the OPST graphene, which illustrates more details on the geometric parameters. The array comprises two parts: a ring with four openings and a cross-like structure.
As for the open-ended ring structure, the inner radius is r = 1.4 µm, the outer radius is R = 1.8 µm, the ring width is w = R − r = 0.4 µm, and the opening width is a = 0.15 µm. The width of the cross-like structure is b = 0.63 µm, and its end curvature is equal to 1/r. In the X and Y directions, periodic boundary conditions are used in COMSOL, and PML (perfectly matched layers) are set in the Z direction. The incident frequency is set at 1~10 THz.

The total conductivity of graphene is defined as σg = σintra + σinter [34], where σintra represents the intra-band conductivity and σinter the inter-band conductivity. They can be described (simplified) by the Kubo formula [35]:

σintra = (i e² kB T)/(π ħ² (ω + i/τ)) [EF/(kB T) + 2 ln(exp(−EF/(kB T)) + 1)]   (1)

σinter = (i e²)/(4π ħ) ln[(2|EF| − (ω + i/τ)ħ)/(2|EF| + (ω + i/τ)ħ)]   (2)

Here, ω is the incident angular frequency, e is the charge of a single electron, ħ is the reduced Planck constant, kB is the Boltzmann constant, T is the surrounding temperature, EF is the Fermi level of graphene, and τ is the relaxation time of graphene. Due to the Pauli exclusion principle, EF ≫ ħω in the terahertz band, causing the surface conductivity of graphene to be mainly determined by the intra-band contribution. As a result, the σinter term can be ignored, simplifying σintra to a Drude conductivity, and σg can be expressed as [36,37]:

σg(ω) = (e² EF)/(π ħ²) · i/(ω + i/τ)   (3)

From Equation (3), we can infer that σ(ω) can be dynamically adjusted by changing EF and τ to obtain certain resonant frequencies. In terms of A = 1 − R − T, the absorption A equals 1 minus the reflectance R and transmittance T, and it approaches 1 when R and T are small enough [38-40]. Au is an excellent conductor, and when electromagnetic waves hit its surface, they rapidly diminish in strength. The skin depth of Au is defined as the distance at which the wave's amplitude has decayed to 1/e of its value at the surface:

δ = 1/√(π f µ σ) = √(2/(ω µ σ))   (4)

In this study, the range of f is 10^12~10^13 Hz (1~10 THz), with µ = µ0 = 4π × 10^−7 N/A² and σ = 4.56 × 10^7 S/m for Au. Therefore, the skin depth can be calculated as 7.4531 × 10^−2 ~ 2.3569 × 10^−2 µm, which is significantly smaller than the thickness set in the simulation, tm = 1 µm, indicating that the thickness of the Au layer is sufficient to block the propagation of electromagnetic waves and ensure that T is negligible (T = 0). When SPR occurs, R = 0, indicating that perfect absorption is theoretically achieved.
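A short numerical sketch of Equations (3) and (4) is given below; it reproduces the quoted skin-depth values, while the Fermi level (0.75 eV, taken from the tuning discussion later in the paper) and the relaxation time (1 ps) are assumed illustrative inputs rather than values fixed by the text.

```python
# Numerical sketch of the Drude-type graphene conductivity (Eq. (3)) and the
# Au skin depth (Eq. (4)). E_F = 0.75 eV follows the later tuning discussion;
# tau = 1 ps is an assumed illustrative relaxation time.
import numpy as np

e    = 1.602e-19        # C, electron charge
hbar = 1.055e-34        # J*s, reduced Planck constant
mu0  = 4e-7 * np.pi     # N/A^2, vacuum permeability

def sigma_graphene(f_thz, E_F_eV=0.75, tau=1e-12):
    """Intra-band (Drude) conductivity sigma_g(omega) of graphene, in S."""
    omega = 2 * np.pi * f_thz * 1e12
    E_F = E_F_eV * e
    return (e**2 * E_F / (np.pi * hbar**2)) * 1j / (omega + 1j / tau)

def skin_depth_um(f_thz, sigma=4.56e7):
    """Skin depth delta = 1/sqrt(pi*f*mu*sigma) of Au, in micrometres."""
    return 1.0 / np.sqrt(np.pi * f_thz * 1e12 * mu0 * sigma) * 1e6

for f in (1.0, 10.0):
    print(f"f = {f:4.1f} THz: |sigma_g| = {abs(sigma_graphene(f)):.3e} S, "
          f"delta_Au = {skin_depth_um(f):.4f} um")
# delta_Au evaluates to about 0.0745 um at 1 THz and 0.0236 um at 10 THz,
# both far below the 1 um gold layer used in the simulations.
```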
In order to analyse the proposed structure, an equivalent circuit is utilized, as shown in Figure 2. It provides a clear visualization of how the components of the structure interact and contribute to the overall performance. In this model, the Au layer is considered a short circuit, so ZAu is negligible [41]. Graphene is equivalent to a resistor, an inductor, and a capacitor, so the parameters in the figure can be written in the usual transmission-line form, with the graphene sheet as a series RLC branch in parallel with the input impedance Zd of the gold-backed SiO2 spacer:

Zg = Rg + iωLg + 1/(iωCg),  Zin = Zg Zd / (Zg + Zd)

where the calculation of Rg, Lg, and Cg of graphene can be referenced in [42]. The absorption of the absorber can be described using the reflection coefficient Γ: A = 1 − |Γ|². Thus, perfect absorption can be achieved when Re(Zin) = Z0 (the intrinsic impedance of free space), which is also discussed in the next section with effective impedance matching theory. The proposed metamaterial absorber is shown in Figure 3.

In actual fabrication, the first step is to clean a silicon substrate with acetone and isopropyl alcohol. Then, it is dried with high-purity compressed nitrogen. Next, a gold ground plane is coated onto the substrate using electron beam evaporation (a type of physical vapor deposition, PVD) [43] at room temperature. A SiO2 layer is then spin-coated onto the gold plane, and its thickness is calibrated with a stylus profilometer (Dektak XT, Bruker, Billerica, MA, USA) to ensure it reaches the desired value. The final step is to grow a graphene layer on a copper catalyst using chemical vapor deposition (CVD). Once the graphene layer is grown, photolithography is used to create the OPST pattern.

Results and Discussion

Figure 4a shows the total absorption (the black line) of the OPST graphene absorber, which is formed by combining an open-ended ring and a cross-like shape. It can be seen that the OPST graphene absorber exhibits three narrow absorption peaks. Figure 4b,c shows its two sub-structures. Evidently, the total absorption is significantly improved by combining these two types, especially at 6.76 THz. The absorber achieves perfect absorption at frequencies fI = 4.04 THz, fII = 6.76 THz, and fIII = 9.40 THz, with efficiencies of 99.404%, 99.353%, and 99.146%, which are named Mode I-III. The OPST graphene generates surface plasmons in contact with the incident wave, and the wavelengths of the three modes are strongly confined by the graphene. The incident wave coincides with the frequency of the surface free electrons, resulting in SPR [44,45]. This then excites plasma oscillations, resulting in strong absorption of energy and the achievement of perfect absorption.
To obtain clear proof of perfect absorption, we used the effective impedance matching principle of an ideal electromagnetic metamaterial absorber. The corresponding results are shown in Figure 5, which displays the change in the impedance with the incident frequency around Mode I-III. This relationship is based on the equivalent impedance theory formula [46]:

Z = √[((1 + S11)² − S21²) / ((1 − S11)² − S21²)]

The equivalent impedance, denoted by Z, is related to the scattering parameters S11 and S21, which correspond to reflection and transmission, respectively. According to this formula, perfect absorption occurs when the equivalent impedance is matched with the free-space impedance, leading to a significant decrease in reflection (S11 = 0). This is achieved when the real part Re(Z) is close to one while the imaginary part Im(Z) is close to zero. Through a comprehensive analysis of Figures 4 and 5, we can find that these three resonant frequencies of the OPST graphene absorber are indeed perfectly matched, as described by the theory.
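The retrieval step described above can be sketched as follows; the S-parameter values in the example are placeholders rather than solver output.

```python
# Sketch of the effective-impedance retrieval: given simulated S-parameters,
# compute the normalized impedance Z and the absorption A = 1 - |S11|^2 - |S21|^2.
import numpy as np

def effective_impedance(s11: complex, s21: complex) -> complex:
    """Normalized impedance Z from the standard S-parameter retrieval."""
    return np.sqrt(((1 + s11) ** 2 - s21 ** 2) / ((1 - s11) ** 2 - s21 ** 2))

def absorption(s11: complex, s21: complex) -> float:
    return 1.0 - abs(s11) ** 2 - abs(s21) ** 2

s11, s21 = 0.05 + 0.04j, 0.0     # placeholder values; S21 ~ 0 behind the Au mirror
z = effective_impedance(s11, s21)
print(f"Z = {z.real:.3f} + {z.imag:.3f}j, A = {absorption(s11, s21):.4f}")
# Near resonance Re(Z) -> 1 and Im(Z) -> 0, so the reflection vanishes and A -> 1.
```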
In order to further study the principle of perfect absorption in the three modes [36], electric field monitors were set at fI = 4.04 THz, fII = 6.76 THz, and fIII = 9.40 THz, respectively, and we obtained the cross-sectional electric field distribution in the X-Y plane, as shown in Figure 6.
The intensity values of the electric field are represented by the color bar on the right side of the figure, with stronger values indicated by warmer colors (e.g., red). In Figure 6a, the electric field is mainly concentrated at the four openings of the ring. At 6.76 THz in Figure 6b, an electric field is also excited at the straight edges of the cross and the edge of the outermost circle, but it is weakened at the four openings. In Figure 6c, the electric field is mainly excited at the edges of the four openings, the outermost circle, and near the intersection of the cross. These three cases can be attributed to the coupling of the vibration frequency of the patterned OPST graphene layer with waves at these three frequencies, providing electric dipole resonance and greatly consuming incident energy [47]. Thus, the absorber achieves a perfect match with the free-space impedance in the three resonance frequency bands, and the incident waves are eventually perfectly absorbed.

Furthermore, using the control variable method, we studied the effect on absorption of adjusting the geometric parameters [48]: the opening width a, the cross width b, the ring width w, and the inner radius r. The corresponding results are presented in Figure 7. Figure 7a,c,e,g displays the changes in the absorption efficiencies of each resonance mode, and Figure 7b,d,f,h shows the corresponding shift of the resonant frequency. Figure 7a demonstrates that increasing parameter a from 0.05 µm to 0.25 µm enhances the absorption efficiency of Mode I but causes the absorption efficiency of the other modes to initially increase and then decrease. This trend is attributed to the fact that parameter a, the opening width of the ring structure, affects the local electromagnetic fields of all three modes, as shown in Figure 6. Additionally, the resonance frequencies depicted in Figure 7b are blue-shifted, particularly in Mode I, indicating that parameter a mainly influences its resonance frequency. When b varies from 0.53 µm to 0.73 µm, the absorption efficiencies of Mode I and Mode III barely change (relative to Mode II), as demonstrated in Figure 7c. This is because the local electromagnetic field distribution in Modes I and III is almost unaffected by b, as it is not concentrated on the straight edge of the cross. The shift in the resonant frequencies of the three modes due to the change in parameter b, shown in Figure 7d, is a blue shift, with the most significant change occurring in Mode III. In Figure 7e, changing parameter w does not significantly affect the absorption efficiencies of Modes I and II, but it noticeably impacts the absorption rate and resonance frequency of Mode III, resulting in a significant red shift. This effect can be attributed to the increase in the edge length of the outermost circle due to the increase in w, as R = r + w and C = 2πR. Consequently, the distribution region of the plasma broadens, as evident in Figure 6, and the resonance frequency of Mode III gradually decreases. The same phenomenon can be observed in Figure 7g,h when parameter r (or R) is changed, with Modes I and II also exhibiting a red shift.
However, the pattern size of the OPST graphene should not be too large, as this can cause it to couple to adjacent graphene arrays and affect the resonance frequency.

In addition, we also discussed the choice of the thickness of the dielectric layer, as shown in Figure 8. The thickness of the SiO2 layer (td) has an effect on the absorption of the three modes, which is consistent with the expectation of the theoretical derivation (Equations (5) and (7)). When td = 4.6 µm, the absorption of Mode I-III remains above 98.56%, with the highest average value.
EF is given by [49]: where Vf is Fermi velocity, ε0 and εr are the dielectric constant in a vacuum and re dielectric constant, Vg is the applied voltage, which can be controlled by altering th voltage or chemical doping, e0 is the amount of electron charge, and td is the thickn the dielectric layer. As shown in Figure 9a,c, when EF increases from 0.65 eV to 0.85 eV, the reso frequencies of Mode I-III suggest a blue shift. Due to how the resonance frequency the metamaterial absorber is related to its capacitance C, when EF increases, C increa the same time, leading to an increase in the resonance frequency. This phenomeno also be explained in terms of the formula for resonant wavelengths: λres = α + β × nsp, w nsp represents the effective refractive index of graphene, while α and β are coefficient have close connections with the patterned graphene's geometry and the surroundi electric properties [50]. Thus, as EF increases, nsp decreases, causing λres to decrease tually. Consequently, the resonance frequency increases, resulting in a blue shif Mode I, the resonance frequency blue shift is in the range of 3.79~4.29 THz, with the est absorption of 99.99% at EF = 0.70 eV. For Mode II, the resonance frequency blue ranges from 6.30 THz to 7.20 THz, with the highest absorption rate of 99.91% at EF eV. Finally, for Mode III, the resonance frequency blue shift ranges from 8.76 THz THz (the maximum range is limited to 10 THz in this simulation when EF = 0.85 eV the maximum absorption rate is 99.27% at EF = 0.75 eV. The tuning sensitivity of the resonant modes is 2.5 THz/eV, 4.5 THz/eV, and 6.2 THz/eV (up to 10 THz), respect In practical applications, the dynamic tuning ability of absorbers is crucial, especially when the structural parameters are fixed. Figure 9 illustrates the change in the absorber's absorption spectrum when the Fermi level (E F ) of graphene is altered while keeping the structural parameters constant. E F is given by [49]: where V f is Fermi velocity, ε 0 and ε r are the dielectric constant in a vacuum and relative dielectric constant, V g is the applied voltage, which can be controlled by altering the grid voltage or chemical doping, e 0 is the amount of electron charge, and t d is the thickness of the dielectric layer. In practical applications, the dynamic tuning ability of absorbers is crucial, espe when the structural parameters are fixed. Figure 9 illustrates the change in the absor absorption spectrum when the Fermi level (EF) of graphene is altered while keepin structural parameters constant. EF is given by [49]: where Vf is Fermi velocity, ε0 and εr are the dielectric constant in a vacuum and rel dielectric constant, Vg is the applied voltage, which can be controlled by altering the voltage or chemical doping, e0 is the amount of electron charge, and td is the thickne the dielectric layer. As shown in Figure 9a,c, when EF increases from 0.65 eV to 0.85 eV, the reson frequencies of Mode I-III suggest a blue shift. Due to how the resonance frequency the metamaterial absorber is related to its capacitance C, when EF increases, C increas the same time, leading to an increase in the resonance frequency. This phenomenon also be explained in terms of the formula for resonant wavelengths: λres = α + β × nsp, w nsp represents the effective refractive index of graphene, while α and β are coefficients have close connections with the patterned graphene's geometry and the surroundin electric properties [50]. Thus, as EF increases, nsp decreases, causing λres to decrease e tually. 
As shown in Figure 9a,c, when EF increases from 0.65 eV to 0.85 eV, the resonance frequencies of Mode I-III show a blue shift. Because the resonance frequency ω of the metamaterial absorber is related to its capacitance C, when EF increases, C increases at the same time, leading to an increase in the resonance frequency. This phenomenon can also be explained in terms of the formula for the resonant wavelength: λres = α + β × nsp, where nsp represents the effective refractive index of graphene, while α and β are coefficients that are closely connected with the patterned graphene's geometry and the surrounding dielectric properties [50]. Thus, as EF increases, nsp decreases, causing λres to decrease eventually. Consequently, the resonance frequency increases, resulting in a blue shift. For Mode I, the resonance frequency blue shift is in the range of 3.79~4.29 THz, with the highest absorption of 99.99% at EF = 0.70 eV. For Mode II, the resonance frequency blue shift ranges from 6.30 THz to 7.20 THz, with the highest absorption rate of 99.91% at EF = 0.80 eV. Finally, for Mode III, the resonance frequency blue shift ranges from 8.76 THz to 10 THz (the maximum range is limited to 10 THz in this simulation when EF = 0.85 eV), and the maximum absorption rate is 99.27% at EF = 0.75 eV. The tuning sensitivity of the three resonant modes is 2.5 THz/eV, 4.5 THz/eV, and 6.2 THz/eV (up to 10 THz), respectively, indicating that the OPST graphene absorber exhibits excellent tunability. In other words, a slight change in EF can cause a significant shift in the resonant frequency. Figure 9b demonstrates that the absorption of Mode I-III changes as EF varies. At EF = 0.75 eV, the average absorption of the three peaks reaches a maximum of 99.27%, suggesting that setting EF to 0.75 eV is a balanced choice. In conclusion, the absorption and resonant frequencies of the OPST graphene absorber can be dynamically adjusted solely by altering EF, without modifying its structural parameters, which is highly advantageous in practical applications.

We investigated the response characteristics of the OPST graphene absorber to changes in the polarization mode and incident angle of the electromagnetic waves. The results are shown in Figure 10. In Figure 10a, we initially assumed that electromagnetic waves were incident vertically on the absorber's surface under the two polarizations (TE and TM), that is, with an incident angle of 0°. The two absorption diagrams are highly consistent, suggesting that the OPST graphene absorber is insensitive to the two polarizations due to the central symmetry of its surface geometry. Next, we gradually increased the incident angle from 0° to 50° under TE and TM polarizations, respectively, and we obtained the results shown in Figure 10b,c.
It can be seen that the absorption spectra do not change significantly with increasing incident angle regardless of the polarization mode, demonstrating the incident angle insensitivity of the OPST graphene absorber. Due to the highly symmetric design of the graphene pattern and the fact that the localized surface plasmon resonance (LSPR) wavelength of the graphene nanostructure is smaller than the vacuum wavelength of the incident light [51-53], it exhibits strong local surface plasmon resonance when the incident angle is between 0~50° while the symmetry of the structure is maintained. However, under TE polarization, the influence of the incident angle on the Mode I absorptivity was greater than that on the other two modes, decreasing from 99.18% at 0° to 91.78% at 50°. Similarly, the Mode III absorption decreased from 99.25% at 0° to 90.80% at 50° under TM polarization. This is because, at these two resonant frequencies, the incident area decreases with increasing incident angle, which easily leads to weakened plasmon resonance intensity and decreased absorption [54,55]. All discussions in this paper are based on TE polarization unless otherwise noted. In summary, the OPST graphene absorber is polarization and incident angle insensitive under 0~50°. In practical applications, a concave structure can be designed above the absorber's surface to couple with the incident wave at a specific incident angle, achieving perfect absorption.
Finally, we examined the ambient refractive index n sensing ability of the OPST graphene absorber, as presented in Figure 11. As n ranges from 1.00~1.08, the three resonant frequencies corresponding to Mode I-III exhibit a red shift, moving over the ranges 3.95~4.02 THz, 6.64~6.74 THz, and 9.20~9.36 THz, respectively. The corresponding sensitivity S is calculated as [56,57] S = ∆f/∆n, where ∆f is the resonant frequency shift. Here, ∆n is 1.08 − 1.00 = 0.08 in this study, and the sensitivity of the three modes is represented as S I = 0.875 THz/RIU, S II = 1.250 THz/RIU, and S III = 2.000 THz/RIU, where RIU refers to the refractive index unit. The high-frequency resonance exhibits a stronger electric field density, making it more responsive to environmental changes. Therefore, the sensitivity at high resonant frequencies (e.g., Mode III) is higher, resulting in a larger red shift range. Moreover, the absorption of the three modes remains above 97%, even when the ambient refractive index changes.
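The sensitivity values quoted above can be verified in the same way from S = ∆f/∆n. A minimal R sketch (illustrative only; the object names are ours):

delta_f <- c(mode1 = 4.02 - 3.95, mode2 = 6.74 - 6.64, mode3 = 9.36 - 9.20)  # red-shift ranges in THz
delta_n <- 1.08 - 1.00                                                       # refractive-index change (RIU)
S <- delta_f / delta_n                                                       # sensitivity in THz/RIU
round(S, 3)
# mode1 mode2 mode3
# 0.875 1.250 2.000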
Figure 12 illustrates the variations in the FWHM of the absorption and the figure of merit (FOM) for the three modes. FOM represents the ratio of sensitivity to FWHM [58,59], i.e., FOM = S/FWHM (11). According to Equation (11), the maximum FOM of the three modes is FOM I = 3.74 RIU −1 , FOM II = 6.08 RIU −1 , and FOM III = 9.58 RIU −1 , respectively. The findings indicate that the absorber possesses high S and FOM and exhibits dynamic tunability, polarization independence, and incident angle insensitivity, which give it superior sensing performance and a broad range of potential applications. As listed in Table 1, the proposed absorber is compared with previous works. Evidently, the sensitivity in the refractive index sensing of our design has been significantly improved. Conclusions In summary, we designed a novel SPR metamaterial absorber that offers triple-mode perfect absorption, tunability, polarization independence, incident angle insensitivity, high sensitivity, and high FOM. It is a sandwich-stacked structure with a monolayer graphene periodic array featuring an open-ended prohibited sign type (OPST) pattern on the top, a thicker layer of SiO 2 in the middle, and a Au mirror at the bottom. Compared with other structures, the proposed structure possesses the novelty of simple graphene patterns, stable dielectric material properties, flexible geometric parameters, multi-band absorption, and high sensitivity. Based on the study in the COMSOL Multiphysics software, we conclude that the absorber achieves perfect absorption at f I = 4.04 THz, f II = 6.76 THz, and f III = 9.40 THz, respectively, and exhibits angle insensitivity when the incident angle is less than 50°. Therefore, the absorber has potential application value in active optoelectronic devices. In this study, we explore the mechanism behind achieving perfect absorption in 1~10 THz using the effective impedance matching principle. By mapping the local electric field distribution in the X-Y plane, we gain a deeper understanding of the absorber's operation. Furthermore, the effect of varying the geometrical parameters of the absorber and the Fermi level of graphene on the resonant frequency and absorption peak of the three resonant modes of the absorber is investigated. In addition, we examined the sensor performance by adjusting the ambient refractive index. The results suggest the maximum S and FOM can reach 2.00 THz/RIU and 9.58 RIU −1 , respectively. These findings suggest that the proposed device has significant potential for use in photoelectric detection, chemical sensing, and related fields.
2023-04-30T15:22:37.986Z
2023-04-27T00:00:00.000
{ "year": 2023, "sha1": "7c9afd9d691abe7bff0faa9437070a2499402439", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3390/mi14050953", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "705ad0dc0c1b6cb0194e491843964c79323d173e", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [] }
157058301
pes2o/s2orc
v3-fos-license
Validation of CyTOF Against Flow Cytometry for Immunological Studies and Monitoring of Human Cancer Clinical Trials Flow cytometry is a widely applied approach for exploratory immune profiling and biomarker discovery in cancer and other diseases. However, flow cytometry is limited by the number of parameters that can be simultaneously analyzed, severely restricting its utility. Recently, the advent of mass cytometry (CyTOF) has enabled high dimensional and unbiased examination of the immune system, allowing simultaneous interrogation of a large number of parameters. This is important for deep interrogation of immune responses and particularly when sample sizes are limited (such as in tumors). Our goal was to compare the accuracy and reproducibility of CyTOF against flow cytometry as a reliable analytic tool for human PBMC and tumor tissues for cancer clinical trials. We developed a 40+ parameter CyTOF panel and demonstrate that compared to flow cytometry, CyTOF yields analogous quantification of cell lineages in conjunction with markers of cell differentiation, function, activation, and exhaustion for use with fresh and viably frozen PBMC or tumor tissues. Further, we provide a protocol that enables reliable quantification by CyTOF down to low numbers of input human cells, an approach that is particularly important when cell numbers are limiting. Thus, we validate CyTOF as an accurate approach to perform high dimensional analysis in human tumor tissue and to utilize low cell numbers for subsequent immunologic studies and cancer clinical trials. INTRODUCTION To discover immune correlates and biomarkers of disease requires global profiling of the immune system, the proteins differentially regulated by therapy and how these relate to disease outcome. Highly focused exploration may provide hypothesis-driven insight, but often the paradigm-altering discoveries come from unbiased global immune profiling. Flow cytometry (FC) has emerged as a key tool to profile multiple parameters of the immune system, including vital functional and exhaustion markers associated with the quality of the immune response (1). However, FC is limited by the number of parameters that can be analyzed at one time (generally 12 per staining panel). This means that stains must be broken up into groups with redundancy of many of the cell lineage markers in different stains. As a result, FC requires large sample sizes for coverage of diverse immune subsets. This is particularly detrimental for tumor biopsies where sample sizes are often limiting and the broad array of FC staining panels often cannot be performed. Further, when only a few markers can be analyzed in a single sample, researchers must design panels using a priori knowledge of marker expression patterns to characterize cells of interest. If unusual marker expression patterns are encountered, as is often the case in disease states, follow-up studies that require time-consuming design and optimization of new panels must be performed, assuming more patient sample is available. Recently, the use of high-dimensional time-of-flight mass cytometry (CyTOF) to identify 40+ parameters simultaneously has emerged as a technique for broad-scale immune profiling and biomarker discovery (1)(2)(3)(4)(5)(6)(7). Because CyTOF allows far more markers to be measured in a single tube, fewer cells are required per experiment than would be needed for traditional FC, which would require multiple tubes (with different antibody panels) to cover the same number of markers. 
By incorporating a large number of parameters into single stains, CyTOF enables acquisition of large amounts of immunologic data from limited sample sizes to better understand biologic systems, response to therapy (8) and signatures of disease (1,(5)(6)(7)(9)(10)(11)(12)(13)(14). Examples include characterization of intra-and inter-tumor leukemia heterogeneity that correlates with clinical outcomes (15) as well as dissections of T and NK cell subtypes with high resolution (16)(17)(18)(19), antiviral T cell responses (5,7,18,20), and immune cell signatures linked to recovery from surgery (21). Thus, CyTOF has enormous potential to discover disease associated immunologic changes in cancer, identify functional changes to guide subsequent therapy and ultimately predict therapeutic outcomes. Both FC and CyTOF utilize antibodies to label targets on cells. For FC these antibodies are labeled with fluorophores that are excited by lasers to emit light subsequently detected by the flow cytometer. Due to the range of wavelengths of these light emissions, there is overlap in their emission spectra that must be mathematically compensated, thus limiting the number of fluorophores that can be used simultaneously. CyTOF uses antibodies conjugated to rare heavy metal isotopes that are not normally present in biological specimens. As opposed to fluorescence, CyTOF uses an atomic mass cytometer to detect the time-of-flight (TOF) of each metal. Each atom's TOF is determined by its mass, allowing the composition of metal atoms on each cell to be ascertained. Detection overlap among heavy metal isotopes is generally limited to <2% (22) rather than the 5-100% spectral overlap seen in conventional FC, and backgrounds are very low because cells do not naturally contain heavy metals. Thus, the detection of low-expression markers is greatly enhanced even on cell populations such as myeloid cells with high auto-fluorescence. The goal of this study is to validate CyTOF against FC for use in immune profiling for clinical trials. Panels were designed to include major (and most minor) immune lineage defining markers in combination with a wide array of functional, activation, exhaustion, differentiation, chemotaxis, immunomodulatory, and senescence markers (Table 1). Overall, our results demonstrate that CyTOF faithfully recapitulates FC data in PBMC and tumor tissues, providing reliable staining of >35 parameters for high dimensional analyses for analysis of cancer clinical trials. PBMC and Tumor Tissue Collection All human tissues and blood were obtained through protocols approved by the institutional review board. Written informed consent was obtained from all donors. Peripheral blood samples were collected from 11 healthy donors into sterile anticoagulantcoated tubes from the Healthy Donor Blood Collection Study at the Princess Margret Cancer Center (IRB#11-0343). Five surgically resected tumor specimens; 2 ovarian (IRB#10-0335), 2 melanoma (IRB#05-0495), and 1 breast tumor (IRB#06-0801) were obtained from the UHN Biospecimen Program. Sample Processing Peripheral blood mononuclear cells (PBMCs) were isolated by Ficoll-paque density gradient centrifugation from the healthy donor's blood. After isolation, cells were directly stained for flow and mass cytometry. Excess cells were aliquoted in 10 7 cells per vial in freezing media (10% DMSO in heat-inactivated FBS) and cryopreserved in liquid nitrogen. 
Tissue samples were minced into 2-4 mm 3 fragments and digested enzymatically into single cell suspensions with the gentleMACS Dissociator (Miltenyi Biotech, catalog #130-093-235) and the human tumor dissociation kit (Miltenyi Biotech, catalog #130-095-929) to obtain single cell preparations. Cells were then aliquoted and cryopreserved in liquid nitrogen. CyTOF and Flow Cytometry Antibodies The same antibody clones were used for CyTOF and FC. The vendor from which each antibody was purchased is listed in Tables 1, 2. For CyTOF, purified unconjugated antibodies used were Biolegend MaxPar Ready antibodies or custom-made with no additional protein carrier from Biolegend or Thermo Fisher. CyTOF antibodies were labeled with metal-tag at the SickKids-UHN Flow and Mass Cytometry Facility using the MaxPar Antibody Labeling kit from Fluidigm (catalog #201300). Staining Procedure After PBMCs isolation, cells were counted and viability measured by trypan blue exclusion. One million viable cells were aliquoted into 4 ml polystyrene V-bottom tubes for CyTOF staining. For FC staining, 1 million viable cells per well were added to 8 wells of into 96-well plate for the FC panels shown in Table 2. CyTOF and FC staining were performed simultaneously. Single cells suspensions from tumor tissues were handled analogously same way after thawing. For FC staining, cells were incubated in Fc blocker (ThermoFisher, catalog #16-9161-73) for 10 min at room temperature, followed by incubation in the surface markers antibody cocktail for 30 min at 4 • C. Cells were then fixed with × g for 3 min in phosphate buffer saline (PBS) to be ready for FC acquisition. For CyTOF staining, cells were Fc blocked as for FC staining, followed by incubation with surface marker staining cocktail for 30 min at 4 • C. For viability staining, cells were washed with PBS and incubated for 5 min in room temperature in 200 µl of 1 µM cisplatin solution (BioVision, catalog #1550-1000). Cisplatin was quenched by adding 2 ml of 5% serumcontaining PBS. Cells were fixed and permeabilized immediately in eBioscience Foxp3/Transcription Factor Staining Buffer Set, followed by incubation in the intracellular markers antibody cocktail for 30 min at 4 • C. EQ Four Element Calibration Beads (Fluidigm) were used to normalize signal intensity over time. For iridium labeling of cellular DNA, cells were suspended in 1 ml of 100 nM of iridium (Fluidigm, Catalog #201192B) in PBS containing 0.3% saponin and 1.6% formaldehyde for 1 h at 4 • C. Cells were then washed and kept in PBS with 1.6% formaldehyde in 4 • C for 1 to 4 days before acquisition. Data Acquisition and Analysis Cells stained for FC were acquired on the day of or the day after staining using a 5-laser LSR Fortessa X-20 (BD) at the Flow Cytometry Core Facility at Princess Margaret Cancer Center. Single stain controls for each fluorochrome were prepared using UltraComp eBead Compensation Beads (ThermoFisher, catalog #01-2222-42). Data were analyzed using FlowJo V10. For CyTOF data acquisition, cells were pelleted in Milli-Q water on the day of acquisition and transferred on ice to SickKids-UHN Flow and Mass cytometry Facility to be acquired on third-generation Helios mass cytometer (Fluidigm). Cells were then resuspended into 1 ml of EQ beads diluted 1:10 in Maxpar Cell Acquisition Solution and filtered through cell strainer cap tubes. Cells were acquired at rate of 100-250 events per second. 
Acquired raw FCS files were normalized with the preloaded normalizer algorithm on CyTOF software version 6.7. Normalized CyTOF FCS files were analyzed using Cytobank 6.2 (Cytobank, Inc) to manually gate different populations and create 2 dimensions and high dimensional plots. Parameters used for making the viSNE plots are CD3, CD4, CD8, CD25, Foxp3, CD19, CD56, CD16, HLA-DR, CD11c, CD33, CD14. Populations were then defined based on known lineage combinations of these proteins. viSNE analyses were performed using equal sampling per comparison, perplexity = 30, theta = 0.5, iterations = 1,000-5,000. For both FC and CyTOF, 1 million cells were stained and an average of 100,000 cells were acquired. Populations were gated based on their expression of linage defining markers (e.g., CD3 for T cells, CD19 for B cells) For manual gating on biaxial plots, the positive population of each marker (e.g., CD3+GzmB+) was defined as the events above the negative population (e.g., CD3-) on the same plot for both CyTOF and FC. Statistics The equivalency between CyTOF and FC were compared using a paired TOST equivalence test. The paired TOST equivalence test reverses the null and alternative hypothesis to place the burden of proof on showing that two variables measured for the same subject are significantly equivalent (23). R package "equivalence" (version 0.7.2) was used to perform the equivalence test (24). We used an epsilon value of 5, indicating that a difference in proportion smaller than 5% is deemed equivalent. P-value ≤0.05 was considered statistically equivalent. GraphPad Prism 6 software (GraphPad Software, Inc.) was used to perform Pearson's correlation test. Comparison of CyTOF vs. Flow Cytometry Staining in Freshly Isolated Peripheral Blood Mononuclear Cells To appropriately compare staining patterns and expression of proteins of interest for cancer immunotherapy trials, we developed a 40+ parameter CyTOF panel that could identify all major (and most minor) cell lineage defining markers, in combination with transcription factors, activation/exhaustion, differentiation, and cytolytic factors (Tables 1, 2). These markers were chosen to broadly profile the differentiation and functional state of many cell types simultaneously instead of solely focusing on a single cell type (such as CD8 T cells or macrophages) as is often the case. For comparison of CyTOF to FC, the same antibody clones were used. Titrations were separately performed for CyTOF and flow cytometry antibodies to obtain the optimal concentration for use. In general, similar concentrations were optimal for both assays. Comparisons were first performed using peripheral blood mononuclear cells (PBMC) isolated from healthy individuals. The PBMC were obtained, isolated and stained by flow cytometry or CyTOF on the same day. Standard FC utilizes the forward light scatter (cell size) and side light scatter (cells internal complexity/ granularity) to identify intact cells and from debris ( Figure 1A). These same parameters are not feasible using mass cytometry, so a DNA-intercalator containing two iridium isotopes (191Ir and 193Ir) is used to detect cells by the CyTOF instrument ( Figure 1A). These reagents additionally can be used for comparison with event length to distinguish single cells, doublets and other non-cellular particles ( Figure 1A). Fluorescent reagents that are preferentially taken up by dead cells are used to distinguish live from dead cells by FC (Figure 1A). 
Similarly, short treatment of cells with the platinum-based reagent cisplatin is used in CyTOF to distinguish live from dead cells (25) (Figure 1A). For CyTOF, metal-containing beads (EQ Calibration beads from Fluidigm) are added to each sample to normalize signal variation (i.e., intensity of signal detected in each metal isotope "channel") resulting from instrument variability over time within each acquisition and between different samples acquired on the same day. In CyTOF, crosstalk between different mass channels can occur mainly due to potential isotopic impurities in the channels that detect other isotopes of the same element. Also, in cases of extremely high signal intensity, spillover, mainly in the mass (M) +1 and M-1 channels can occur as the instrument detectors becomes unable to separate ion peaks of adjacent channels. Another source of spillover in the M+16 channel occurs due to variable oxide formation (13). At the beginning of each analysis, any spillover was determined for each M+1, −1 and +16 channel and if observed, that channel was not used for subsequent analysis in the stain. Note, spillover was not observed in the experiments using this panel. Dimensionality reduction of the CyTOF data onto t-distributed stochastic neighbor embedding (t-SNE)based visualization (viSNE) maps were used to simultaneously resolve the many distinct immune populations (Figure 1B) in combination with the numerous phenotypic/functional markers included in the panel, something less feasible by FC due to the restrictions in parameters that can be easily included in a given stain. To compare the staining of individual proteins by FC and CyTOF, we directly measured their expression using bivariate dot plots. As shown in Figure 2, the frequency of cells expressing a given protein statistically equivalent between CyTOF and FC. We determined statistical equivalence by using the TOST equivalence test, which returns p-values below the significance threshold if the two proportions are deemed equivalent. This similarity was true whether the protein of interest was expressed on the cell surface or intracellularly (Figure 2 and Figure S1). Further, a similar staining frequency of positive staining cells was observed whether the marker was expressed at high (e.g., CD28, CD127) or low (e.g., CD25, 4-1BB) levels (Figure 2 and Figure S1). Visually, a few bivariate plots do not show the exact same staining pattern/intensity between CyTOF and FC, even though frequencies are equivalent (e.g., Helios, T-bet). We next measured the change in staining intensity of each marker by flow cytometry and CyTOF comparing the mean fluorescence intensity (MFI) and the mean metal intensity (MMI), respectively, of each protein. This was done by gating on the negative and positive staining populations for each sample using the same logarithmic scale (same high and low end) for FC and CyTOF data, and then calculating the fold change. This approach was used instead of simply stating the MFI/MMI of the positive population to account for differences in the nonspecific antibody binding, the background (autofluorescence or metal content) or due to inherent differences in the "brightness" of a given fluorochrome or metal tag. The fold change of a given protein was either the same between CyTOF and FC, or was higher by CyTOF ( Table 3). It should be noted however that CyTOF background medians are often zero or close to zero, thereby increasing the fold change values for the CyTOF data. 
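To make the paired TOST equivalence testing described above concrete, the underlying logic can be sketched in a few lines of base R. The authors used the R package "equivalence" with an epsilon of 5 percentage points; the sketch below expresses the same idea as two one-sided t-tests on hypothetical paired frequencies (the data values below are invented purely for illustration):

# Hypothetical paired frequencies (% positive cells) for one marker in the same donors
cytof <- c(12.1, 34.5, 8.2, 21.7, 15.3)
fc    <- c(11.8, 35.0, 8.9, 20.9, 14.7)
eps   <- 5                                   # equivalence margin in percentage points

d <- cytof - fc                              # paired differences
# Two one-sided tests: the null hypothesis is |mean difference| >= eps
p_lower <- t.test(d, mu = -eps, alternative = "greater")$p.value
p_upper <- t.test(d, mu =  eps, alternative = "less")$p.value
p_tost  <- max(p_lower, p_upper)             # TOST p-value
p_tost

With this convention, p_tost ≤ 0.05 supports the claim that the CyTOF and FC frequencies differ by less than 5 percentage points, mirroring the criterion used in the paper.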
Thus, for the staining of human PBMC for cell lineage, activation, exhaustion, differentiation, and functional proteins of interest for immune monitoring and discovery in cancer immunotherapy trials, CyTOF data provides the same quality of staining as flow cytometry. Further, the ability to combine all the markers into one stain using CyTOF provides the opportunity to simultaneously measure changes across the immune system and to identify changes without preconceived bias of what proteins a cell "should" or "should not" express. Comparison of CyTOF and Flow Cytometry in Frozen PBMC Freezing of cells can lead to changes in protein detectability, however these are generally due to cleaving or loss of surface expression as opposed to changes in the technical aspects of the assay (26-28). As a result, we next compared whether CyTOF and FC were similarly effective using previously frozen PBMC. Note, the goal of this comparison is not to determine if freezing of cells disrupts certain markers, but instead to determine whether the two cytometric techniques perform equivalently on previously frozen cells. Viably frozen PBMC from healthy donors were thawed and stained for FC and CyTOF. For these analyses, PBMC had been frozen for at least 1 month prior to thawing and staining. Analogous to fresh PBMC, the percentage of positive cell staining for each marker was similar by CyTOF and FC, despite the inter-individual variability for each marker (Figure 3). Further, similar to fresh PBMC, the staining intensity (i.e., the fold change in MFI and MMI) was similar or better using CyTOF (not shown). Thus, CyTOF is a robust approach to quantify cellular presence, phenotype and function in previously frozen PBMC. Titration of PBMC Required for CyTOF Analysis A critical issue limiting studies with small numbers of cells is the increased cell loss with staining procedures, the potential for increased "background" staining and for CyTOF in particular, the higher cell loss during acquisition. To overcome this issue, we developed a strategy in which serially diluted numbers of human PBMC were mixed with mouse splenocytes at a ratio such that the final number was always a million cells. Mouse splenocytes were used because they can be reliably distinguished from human hematopoietic cells based on expression of non-cross reactive clones of anti-mouse CD45 and anti-human CD45 antibodies ( Figure 4A). We performed two-fold dilutions of human PBMC resulting in human cells comprising 100% (1 million PBMC), 50%, 25% or 12.5% (125,000 PBMC) of the total cells in the mix. These dilutions were subsequently stained for CyTOF analysis using the panel in Table 1 and acquired. Comparison of the anti-mouse and anti-human CD45 antibody expression demonstrated the expected ratios of human PBMC based on the starting dilution and this was observed even at the lowest dilution containing only 125,000 human PBMC ( Figure 4A). Further, titration down to 125,000 human PBMC did not alter their proportions or lead to loss of the smaller populations ( Figure 4B). Of course, biologic restrictions still apply and at diminishing numbers of cells, small populations will increasingly fall below detection due to their loss in the population, similar to FC. Thus, by adjusting the cell numbers to maintain 1 million cells per stain, reliable CyTOF data can be obtained from as few as 125,000 human PBMC. 
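The carrier-cell titration scheme just described can also be laid out explicitly. The following R sketch (illustrative only, with our own variable names) computes the human and mouse cell numbers for each two-fold dilution while keeping the total at one million cells per stain:

total_cells    <- 1e6                                    # fixed total per stain
human_fraction <- c(1, 0.5, 0.25, 0.125)                 # two-fold dilution series
human_cells    <- human_fraction * total_cells           # 1,000,000 down to 125,000 human PBMC
mouse_cells    <- total_cells - human_cells              # mouse splenocyte carrier cells
cbind(human_fraction, human_cells, mouse_cells)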
This technique will be helpful in situations where cell numbers are limiting due to biologic restrictions (e.g., tumor biopsies) or multiple analyses are desired from a limited number of cells. Validating CyTOF in Tumor Tissues Many studies have used CyTOF to interrogate tumor tissues, yet a direct comparison of its validity compared to FC in human tumor samples is lacking. To validate CyTOF vs. FC in human tumor tissues, we used single cell suspensions of previously viably frozen tumors. For our analysis, we chose to compare five tumors made up of 3 types: 2 melanoma, 2 ovarian and 1 breast tumor. Initial viSNE analysis showed various amounts of interpatient variability, but in all cases major immune cell populations were resolved, including T cells, Tregs, macrophages and MDSC ( Figure 5A). Further, within these various populations, phenotypic, functional, and activation/exhaustion proteins with broad or restricted distribution could be identified, including high expression of PD1 in tumor infiltrating CD4 T cells and Tregs, with less PD1 expression observed in CD8 T cells, high level, and broad CD95 (Fas) and CD39 expression across many populations of tumor-infiltrating cells (although the latter was largely absent from CD8 T cells), and restricted expression of granzyme B, primarily by CD8 T cells (Figure 5A). Direct evaluation of CyTOF and FC staining in the different tumor types demonstrated similar proportions of immune cells in the tumor based on CD45 expression. Within the immune cell populations, staining for individual cell subsets was comparable between the two techniques ( Figure 5B and Figure S2). Importantly, comparably to flow cytometry, CyTOF identified expression of numerous activation, differentiation, and functional proteins on tumor infiltrating cells (Figure 5B and Figure S2), and did so in a single stain as opposed to FC which required many separate panels with duplicate lineage markers to attain this same level of staining (Tables 1, 2). Further, like PBMC, the fold change in MFI and MMI were similar or elevated with the CyTOF stain in the tumors (not shown). Our results show that the two technologies provide highly equivalent values across markers and populations in fresh and frozen PBMC, and tumor biopsies. Values of few populations (CD45RO+ in Frozen PBMC, and CD45+, CD3+TIGIT+, CD3+CTLA4+, and CD3+CD28+ in tumor biopsies) did not give rise to statistically equivalent results using equivalence test. However, the values of these populations from CyTOF and FC showed highly significant correlation using Pearson correlation test as shown in Table 4 (r ranges from 0.92 to 0.99, p < 0.05). We believe that the statistical inequivalence we observe in these populations is not due to differences in the two technologies. Instead, the sample size (n = 5) for both the frozen and biopsies specimens did not allow the values of these populations to reach the level of statistical significance using the TOST test for equivalence, although they correlated with high significance using the Pearson test and the fresh samples (n = 11) showed highly significant values for all populations. DISCUSSION The ability of CyTOF to combine many parameters into a single panel allows an unbiased and efficient approach for discovery of novel disease-associated cell populations or biomarkers from limited tumor samples (29). Yet, the comparability of CyTOF to the more standard use of FC of these tumor studies has not been stringently validated. 
Herein, we demonstrate that using our 40+ parameter panel on PBMC and tumor tissue samples, CyTOF is at least as effective, if not more so, than FC for the identification of diverse cell subsets and their subsequent phenotyping. (FIGURE 3 | Comparison of CyTOF and flow cytometry staining of viably frozen human PBMC. Analysis was performed as in Figure 2, except using PBMC that had been previously viably frozen. Each graph represents the donor-paired frequency of cells staining positive for the indicated marker by CyTOF and FC. Data represent previously frozen PBMC samples from 5 healthy donors. Significance was determined by the TOST test for equivalence. p ≤ 0.05 was considered statistically equivalent.) To validate the use of CyTOF we developed a 40+ parameter panel analyzing diverse cell lineages in combination with a comprehensive panel of differentiation, transcription, chemotactic, activation, exhaustion, senescence and functional factors, chosen for their observed and potential relevance for monitoring and discovery in cancer clinical trials. Both CyTOF and FC had comparable efficacy in identifying proportions of cell subsets in human PBMC and tumors, including multiple subsets critical to cancer control and the immunotherapeutic response, e.g., T cells, Tregs, dendritic cells, macrophages, and MDSCs. These techniques were equally efficient whether the PBMC were fresh or previously viably frozen. On these subsets, proteins associated with cell function and differentiation state were stained with the same or better fidelity by CyTOF compared to flow cytometry. Since small numbers of cells are often obtained from tumor tissues, it is important to note that reliable data could be observed by CyTOF using as few as 125,000 PBMC when they were pre-mixed with carrier mouse splenocytes prior to staining to increase overall cell numbers and minimize the loss of human cells during staining procedures and washing steps. In addition, non-immune cells (including non-hematopoietic-derived tumor cells) can also be identified based on the lack of CD45 expression or by addition of other tumor-antigen-specific antibodies. Importantly, CyTOF was able to simultaneously measure all these parameters whereas FC required multiple panels with significant overlap to achieve this goal. This allowed detailed high-dimensional analyses to be performed and a large number of immune cell populations to be plotted on bivariate viSNE plots for subsequent interrogation. This approach is beneficial for immune monitoring, mechanistic understanding and biomarker discovery because it provides an unbiased and broad analysis of the immune system with combinations of markers that do not rely on a priori decisions of cell attributes. Currently, most of the isotopic metals commercially available for conjugation with antibodies are from the lanthanide series. A panel of 40 antibodies can be used simultaneously without technical difficulties, alongside DNA parameters to identify cells and a viability dye to distinguish live from dead cells. Research is underway in the polymer chemistry field to develop the use of metals from outside the lanthanide series to increase the number of parameters researchers can use per panel. The costs of metal-tagged antibodies, antibody conjugation kits, and the running reagents are quite high and may be impeding the widespread use of CyTOF.
Hopefully, with increasing demands, and advancement in reagents and instruments manufacturing technology, prices will be more affordable to wide range of laboratories. Designing an optimal CyTOF panel is as important as it is in flow cytometry. Although technically there is no signal interference between mass channels, isotopic impurities can cause a small amount of contamination between different channels. Therefore, the isotopic purity of the metal-tags used must be taken into consideration when assigning cellular biomarkers to each metal. Generally, less pure metals should be paired with low expression biomarkers, as this keeps the spillover at the background level in the channels where spillover is anticipated and reduces signal interference. Similar to flow cytometry, markers with low expression ideally are paired with high signal-intensity metals like 165Ho or 169Tm for better gating and resolution of the positive population. Again, like flow cytometry, to reduce signal "spillover" in CyTOF, it is good practice to try to use of markers exclusive to cell populations (e.g., CD3, CD19) on adjacent channels where "spillover" potential is highest. Antibody titration to find the optimal dilution is also equally important, as lower dilutions will result in lower resolution, while higher dilution will increase background and "spillover" on these susceptible channels. The photobleaching process of fluorescent dyes in flow cytometry makes it paramount to acquire samples within a few hours after staining. On other hand, metal-tagged samples can be run up to 2 weeks after staining without notable loss of signal and can be cryopreserved up to 1 month without affecting the data quality or staining integrity of both surface and intracellular markers (30). This is very useful in clinical trials, wherein long-term preservation allows researchers to collect samples over a period of time and acquire them simultaneously. The data analysis of CyTOF is perhaps the most challenging part of the workflow. With cytometry data in general, manual gating is the one of the main contributor to inter-laboratory variations (31). An optimally designed panel, with a wellmatched biomarkers and metals-tags as mentioned above, will cause less trouble gating and resolving positive events. So, efforts must be made to design an optimal panel for good data quality. Some laboratories use mass minus one controls (similar to fluorescence-minus one in FC) to build a hierarchy of gates and set positivity threshold, but this does not take into account the inherent background staining of each antibody and the non-specific binding (even if isotypematched control antibody is used) which leads to significant false positive signal. Further, it is impractical to prepare mass-minus one control for 40+ antibodies. However, massminus one controlling is ideal to investigate a potential spillover between channels. Fortunately, the unsupervised clustering and automated populations-detection algorithms, which accompanied the advent of CyTOF high-dimensional data, have decreased the need for manual gating (32). However, for the purpose of this article, we test the similarity between CyTOF and flow cytometry on a marker-by-marker basis and representative examples of the gating used for each marker are shown and scatter plots provided. 
Thus, manual gating on biaxial plots was employed with experience and knowledge of the brightness and purity of each of the metal tags used, their intrinsic background and crosstalk, as well as familiarity with the staining pattern of each immune marker and the frequencies of cell populations. A major drawback to CyTOF is the acquisition flow rate. In comparison to flow cytometry (thousands of cells/second), CyTOF has a slower flow rate (∼250-500 cells/second), resulting in longer acquisition times. Additionally, sample preparation for CyTOF requires extra caution to avoid contamination with heavy metals, which are a common ingredient in laboratory detergents and other basic reagents. Further, because cells are ultimately vaporized in CyTOF, sorting out populations of interest for further analysis and downstream applications is not possible. This is something flow cytometers are able to do easily. Finally, the increase in the number of parameters made available by CyTOF technology has intensified the complexity of the data, which is a strong attribute of the technique, but also requires deeper analysis and in many cases new bioinformatics approaches to interpret and visualize the data. Importantly, neither FC nor CyTOF is the superior technique for all applications; rather, the choice must rest on the questions asked, the answers sought and the ability to analyze different data sets in meaningful ways. Thus, our data now validate CyTOF as an accurate approach to perform high dimensional analysis in human PBMC and tumor tissue for immunologic studies and cancer clinical trials. ETHICS STATEMENT All human tissues and blood were obtained through protocols approved by the Princess Margaret Cancer Center/University Health Network institutional review board. Written informed consent was obtained from all donors. All studies were performed using de-identified human data. AUTHOR CONTRIBUTIONS RG, BN, LS, and DB designed research. RG and HE performed experiments. RG, BN, and DB analyzed data. BM, RD, MG, WX, SL, HE, AR, NH, TM, BW, MB, CG, PO, LS, and DB provided intellectual input, critical discussion, and contributed technical expertise and discussion. RG and DB wrote the paper. Figure 5. Gates in each plot show the frequency of the indicated stained protein by CyTOF (left plots) or FC (right plots). Graphs display donor-paired frequencies of cells staining positive for the indicated protein from tumor tissues by CyTOF (Cy) and FC (Fl). Significance was determined by the TOST test for equivalence. p ≤ 0.05 was considered statistically equivalent.
2019-05-18T13:03:01.576Z
2019-05-17T00:00:00.000
{ "year": 2019, "sha1": "13b5784e62fedd4f119ca4056ede7c408a4a0f1a", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fonc.2019.00415/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "13b5784e62fedd4f119ca4056ede7c408a4a0f1a", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
253889717
pes2o/s2orc
v3-fos-license
On longitudinal moving average model for prediction of subpopulation total In the paper the empirical best linear unbiased predictor of the subpopulation total is proposed under some longitudinal model where both temporal and spatial moving average models of profile specific random components are taken into account. Two estimators of the mean square error of the predictor are proposed as well. Considerations are supported by two Monte Carlo simulation studies and the case study. Introduction In the survey sampling estimation or prediction of population characteristics is usually the key issue but subpopulations (domains) characteristics are of interest as well. What is more, in many cases we are looking for possibilities of increasing the accuracy, especially when the sample size in the domain of interest in the period of interest is small. Such domains are called small areas. In the case of the longitudinal data we can "borrow strength" from different periods and/or domains and use the information on spatial and temporal correlation. In the paper some unit-level longitudinal model is proposed which is a special case of the Linear Mixed Model (LMM) with two random components which obey assumptions of spatial moving average model and the temporal MA(1) model. Verbeke and Molenberghs (2000, p. 24) or Hedeker and Gibbons (2006, p. 115) propose a longitudinal model which is a special case of the Linear Mixed Model with profile-specific random components, where the profile is defined as a vector of random variables for a population element in different periods. Here we define the profile as a vector of random variables for observations of an element in some domain what allows to take the possibility of population changes in time into account. Hence, the profile is not element specific but element and domain specific. In mentioned books the assumptions are made only for the sampled elements while we make assumptions for all of population elements. What is more, the authors assume profiles to be independent, while here they are spatially correlated. In many papers small area predictors are derived under both area-level and unit-level models where the spatial correlation is taken into account but assuming that all data refer to single time point (Molina et al. 2009;Petrucci and Salvati 2006;Petrucci et al. 2005;Pratesi and Salvati 2008;Chandra et al. 2007). The models are special cases of the Linear Mixed Model where one of the random components obeys the assumption of the SAR(1) process between subpopulations (what means that we assume the same realization of the random component for all of the population elements which belong to the same domain). What is more, Salvati et al. (2009) propose the spatial M-quantile predictor which occurred slightly more accurate than other predictors for contaminated data in their simulation studies. If longitudinal data are studied many predictors are considered especially based on area-level models. Rao and You (1994) and Esteban et al. (2012) assume longitudinal area-level models with time effects under the assumption of the AR(1) model and independent area-level effects. In Marhuenda et al. (2013) the area-level model with AR(1) time effects and SAR(1) area effects is proposed. Singh et al. (2005) using the Kalman filtering approach propose a spatio-temporal model. Ugarte et al. (2009) study semiparametric models combining both non-parametric trends and small area random effects using P-spline regression. 
Saei and Chambers (2003) propose many small area methods for longitudinal data as a part of the EURAREA project. In the sections devoted to both unit-level and area-level models they consider independent area effects together with independent or autocorrelated time effects. Models with a time-varying area effect are studied as well. The unit-level model with spatially correlated area effects is also considered but for one period. Molina et al. (2010a) in the European Project SAMPLE propose inter alia many area- and unit-level models and predictors. In chapter 7 they study longitudinal area-level models with time-varying area effects assuming the independence of the effects between domains and the AR(1) model across time instants (independence of time-varying area effects is also considered). They also propose partitioned versions of the model, where domains are divided into two groups and parameters of the distribution of the time-varying area effects differ between these groups. In chapter 8 they consider area-level time-space models which are special cases of the Linear Mixed Model with three random components, including assumptions of the AR(1) and the SAR(1) processes for random components. In chapter 9 they consider unit-level models with independent and correlated time effects. In one of the models they assume three random components including independent area effects and a time-varying area effect which obeys assumptions of the AR(1) model across time instants and independence across areas. In this paper we propose some longitudinal model and we derive the empirical best linear unbiased predictor under the model together with its MSE estimators. The main differences between the proposed approach and proposals presented in other papers are as follows: -random components in our model are profile specific while in other papers area effects or time effects or time-varying area effects are assumed, which means that in our case we do not assume that realizations of random components are the same within domains or within time instants or vary only between domains and time periods, -in this paper we use the spatial moving average model to describe spatial dependence instead of the first order spatial autoregressive model SAR(1), -here we use the first order temporal moving average model to describe temporal autocorrelation instead of the first order autoregressive model, -spatial dependence is assumed at a low aggregation level (between profiles instead of between domains), -temporal autocorrelation is assumed at a low aggregation level (within profiles instead of within domains), -in the model changes of the population and changes of domains' affiliation in time are taken into account. Basic notations Longitudinal data for periods t = 1, . . . , M are considered. In the period t the population of size N t is denoted by Ω t . The population in the period t is divided into D disjoint subpopulations (domains) Ω dt of size N dt , where d = 1, . . . , D. Let the set of population elements for which observations are available in the period t be denoted by s t and its size by n t . The set of subpopulation elements for which observations are available in the period t is denoted by s dt and its size by n dt . The d * th domain of interest in the period of interest t * will be denoted by Ω d * t * . The vector Y id = [Y id j ] M id ×1 will be called the profile and the vector Y sid = [Y id j ] m id ×1 will be called the sample profile. Let the vector Y rid = [Y id j ] M rid ×1 be the profile for nonobserved realizations of random variables.
The proposed approach may be used to predict the domain total for any (past, current and future) period but under the assumption that the values of the auxiliary variables and the division of the population into subpopulations in the period of interest are known. Superpopulation model Special cases of the general or the generalized mixed linear models are widely used in different areas including for example genetics (e.g. Bernardo 1996), insurance (e.g. Wolny 2009) and statistical image analysis (e.g. Demidenko 2004, chapter 12). We consider superpopulation models used for longitudinal data (compare Verbeke and Molenberghs, 2000; Hedeker and Gibbons, 2006) which are special cases of the LMM. The following model is assumed: Y d = X d β d + Z d v d + e d (1), where Y d = col 1≤i≤N d (Y id ), X d is a known matrix of auxiliary variables, β d is a vector of unknown parameters, Z d = diag 1≤i≤N d (Z id ), v d = col 1≤i≤N d (v id ), where v id is a random component and v d (d = 1, 2, . . . , D) are assumed to be independent, e d = col 1≤i≤N d (e id ), where e id is a random component vector of size M id × 1 and e id (i = 1, 2, . . . , N d ; d = 1, 2, . . . , D) are assumed to be independent, and v d and e d are assumed to be independent. What is more, the vector of random components v d obeys the assumptions of the spatial moving average process, i.e. v d = λ (sp) W d u d + u d , where the elements of u d are independent random components with zero mean and variance σ 2 u , and W d is the spatial weight matrix for profiles. Moreover, elements of e id obey the assumptions of the MA(1) temporal process, i.e. e id j = ε id j + λ (t) ε id j−1 , where ε id j are independent random components with zero mean and variance σ 2 ε . Variance-covariance matrices of Y d (where d = 1, 2, . . . , D) are functions of unknown parameters δ = [σ 2 ε , σ 2 u , λ (t) , λ (sp) ]. If the population changes in time, new elements of the population or observations of a population element after the change of its domain affiliation form a new profile Y id . It means that observations of the new population element will be temporally correlated within the profile and spatially correlated with other population elements within the subpopulation. If a population element changes its domain affiliation, its new observations will be temporally correlated (but temporally uncorrelated with old observations) and spatially correlated with other population elements within the new subpopulation (but spatially uncorrelated with elements of the previous subpopulation). To explain the idea of the model let us suppose that we study a population of households divided into domains according to the type of the household (which includes the criterion of the number of persons who belong to the household). Let the variable of interest be expenditures on some goods and let us consider the problem of prediction of the expenditures for the domains. Based on the model we assume that expenditures of two households of the same type (i.e. which belong to the same domain) are spatially correlated (where the distance may be measured in a geographical or economic sense). Moreover, we assume that expenditures of each household are temporally autocorrelated assuming the MA(1) model. The assumption of the MA(1) model (which belongs to the class of short memory time series models) implies that non-zero covariances are assumed for lags which equal 1 (for periods t and t − 1). The assumption is more realistic than the assumption of temporal independence and, in the case of fast changes in the economy and in the economic situation of households, it does not have to be treated as a strong one. Let us consider a situation when the type of the household is changed, e.g. from the household which consists of two persons (a couple) into the household which consists of three persons (a couple and a child). Hence, we assume that the temporal correlation is broken.
Moreover, the household is no longer spatially correlated with households of the previous type but it becomes spatially correlated with households of the new type. Best linear unbiased predictor In what follows, Z sid is a known vector of size m id × 1 (e.g. the vector of 1s) and the variance-covariance matrix of the sample profile is the submatrix obtained from the variance-covariance matrix of Y id by deleting rows and columns for unsampled observations. Based on the Royall (1976) theorem it is possible to derive the formula of the best linear unbiased predictor (BLUP) of the subpopulation total, given by (5), where x rd * t * is a 1 × p vector of totals of auxiliary variables in Ω rd * t * and γ rd * is a vector (with one element per nonsampled observation in the domain) of ones for observations in Ω rd * t * and zeros otherwise. The predictor (5) is the sum of three elements. If t * is the future period then s d * t * = ∅, Ω rd * t * = Ω d * t * and the first element of (5) (given by the sum of Y id * t * over i ∈ s d * t * ) equals zero. Hence, if the domain total of the auxiliary variable is known in the future period and the division of the population into subpopulations in the future period is known, then it is possible to use (5) to predict the future domain total of the variable of interest. The MSE of the BLUP given by (5) is the sum of two components g 1 (δ) and g 2 (δ) given by (7) and (8), where Z rd = diag 1≤i≤N rd (Z rid ), Z rid is a known vector of size M rid × 1 (e.g. the vector of 1s), and the remaining matrices in (7) and (8) are the submatrices obtained from the variance-covariance matrix of Y id by deleting rows and columns for sampled observations, and by deleting rows for sampled observations and columns for unsampled observations, respectively. Empirical best linear unbiased predictor Let the unknown variance parameters in (5) be replaced by their maximum likelihood (ML) or restricted maximum likelihood (REML) estimates under normality. We obtain the two-stage predictor called the EBLUP. It remains unbiased under some weak assumptions (inter alia a symmetric but not necessarily normal distribution of random components for the model assumed for the whole population). The proof is presented by Żadło (2004) for the empirical version of the Royall (1976) BLUP and it is based on the results presented by Kackar and Harville (1981) for the empirical version of the BLUP proposed by Henderson (1950). The problem of MSE estimation based on the Taylor expansion is considered in many papers on small area estimation but for the empirical version of the BLUP proposed by Henderson (1950). The first proposal of the MSE estimator of the empirical version of the BLUP proposed by Henderson (1950) was presented by Kackar and Harville (1984) but they did not prove asymptotic unbiasedness of their MSE estimator. The landmark paper on the topic is the paper written by Prasad and Rao (1990). They assume inter alia (as in this paper) independence of random variables for elements of the population from different domains and that estimators of variance components are unbiased (which is not true for ML and REML estimators). They consider three special cases of the linear mixed model: the Fay and Herriot (1979) model, the nested error regression model and the random regression coefficient model. To derive the MSE estimator they use three approximations. They prove that two of them are of order o(D −1 ) for all of the three considered models. They also prove that the third approximation is of order o(D −1 ) but only for the Fay and Herriot (1979) model. Unbiasedness of estimators of variance components is not assumed by Datta and Lahiri (2000).
They assume the linear mixed model with a block-diagonal variance-covariance matrix (as in this paper) and they prove that the bias of their MSE estimator for ML and REML estimators of variance components is of order o(D −1 ). But the proof is valid if the variance-covariance matrix is a linear combination of variance components. Das et al. (2004) consider a different asymptotic set-up and also prove asymptotic negligibility of the bias of their MSE estimator. In the previous paragraph the problem of the MSE estimation was considered but for the empirical version of the Henderson (1950) BLUP, while in this paper the empirical version of the BLUP proposed by Royall (1976) is studied. Using our notation, Royall (1976) derived the BLUP of a domain characteristic defined as a linear combination of the elements of the population vector of the variable of interest with weights γ, where γ is a known vector. Hence, the problem studied by Henderson (1950) may be treated as a special case of the problem considered by Royall (1976). The MSE estimator of the empirical version of the Royall (1976) BLUP is proposed by Żadło (2009). He presented a proof (under some regularity conditions) that the bias of the derived MSE estimator is of order o(D −1 ). The proof is a direct generalization of the results presented by Datta and Lahiri (2000) for the empirical version of the Henderson (1950) BLUP. The MSE estimators presented below are special cases of the estimators derived by Żadło (2009), where it is assumed that the variance-covariance matrix is a linear combination of unknown variance parameters. For the proposed model (1) the assumption is not met, which means that the order of the approximation of the MSE given by equation (9) and the order of the bias of the MSE estimators presented below [see (10) and (11)] are not proven to be o(D −1 ). Applying results presented by Żadło (2009) under the model (1), for ML and REML estimators of δ the MSE estimator is given by (10) and for ML estimators of δ by (11), where for the proposed model (1) g 1 (δ), g 2 (δ), g 3 * (δ) are given by (7), (8), (14) and (15), respectively. The elements of ∂g 1 (δ)/∂δ are given in the "Appendix" by (16)-(19). In the simulation study the proposed MSE estimator will be compared with the delete-one-domain jackknife MSE estimator proposed by Chen and Lahiri (2002). For the proposed model (1) it is given by (12), where the estimator of δ computed without the dth domain (denoted by δ −d ) is given by the same formula as the estimator of δ but based on the data without the dth domain, b d * t * (δ) = g 1 (δ) + g 2 (δ), g 1 (δ) and g 2 (δ) are given by (7) and (8), and the predictor entering (12) is given by (5) where δ is replaced by δ −d . It is known that the parametric bootstrap distribution approximates the true distribution of the EBLUP very well; see the proof presented by Chatterjee et al. (2008). Hence, it is also possible to use the parametric bootstrap method to estimate the MSE of the EBLUP. The problem for unit-level models in small area estimation is considered inter alia by González et al. (2007) and González et al. (2008). In each iteration of both the jackknife and bootstrap methods we need to estimate the parameters of the model (which is time-consuming). Because the number of iterations in the delete-one-domain jackknife procedure for the data considered in Sects. 6 and 7 is several times smaller than in the bootstrap method, we will use the jackknife method to estimate the MSE in the Monte Carlo simulation studies.
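Before moving to the simulation studies, it may help to see how the variance-covariance structure implied by model (1) can be assembled in R. The sketch below is ours (the function and variable names are not taken from the paper), assumes the spatial moving average and MA(1) parameterizations stated above, and is illustrative rather than a reproduction of the authors' code:

# Covariance blocks implied by model (1) for one domain d
# (assumes v_d = (I + lambda_sp * W_d) u_d with u_d ~ (0, sigma2_u I)
#  and e_idj = eps_idj + lambda_t * eps_id,j-1 with eps ~ (0, sigma2_e);
#  requires the Matrix package only for bdiag())

ma1_cov <- function(M, sigma2_e, lambda_t) {
  # MA(1) covariance matrix of one profile of length M
  R <- diag(sigma2_e * (1 + lambda_t^2), M)
  if (M > 1) {
    for (j in 1:(M - 1)) {
      R[j, j + 1] <- R[j + 1, j] <- sigma2_e * lambda_t
    }
  }
  R
}

domain_cov <- function(M_i, W_d, sigma2_u, sigma2_e, lambda_t, lambda_sp) {
  # M_i: vector of profile lengths in domain d; W_d: spatial weight matrix (N_d x N_d)
  N_d <- length(M_i)
  G_d <- sigma2_u * tcrossprod(diag(N_d) + lambda_sp * W_d)   # Var(v_d)
  Z_d <- matrix(0, sum(M_i), N_d)                             # block design matrix of 1s
  Z_d[cbind(seq_len(sum(M_i)), rep(seq_len(N_d), M_i))] <- 1
  R_d <- as.matrix(Matrix::bdiag(lapply(M_i, ma1_cov, sigma2_e = sigma2_e,
                                        lambda_t = lambda_t)))
  Z_d %*% G_d %*% t(Z_d) + R_d                                # Var(Y_d)
}

For instance, domain_cov(M_i = c(3, 3), W_d = matrix(c(0, 1, 1, 0), 2, 2), sigma2_u = 1, sigma2_e = 1, lambda_t = 0.5, lambda_sp = 0.6) returns the 6 x 6 variance-covariance matrix of a domain with two profiles observed in three periods each.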
Monte Carlo simulation study: artificial data

The simulation study was conducted using the R package (R Development Core Team 2013). It is based on artificial longitudinal data from M = 3 periods. The population size in each period equals N = 400 elements. The population consists of D = 20 domains (subpopulations), each of size 10 elements. A balanced panel sample is considered: in each period the same 40 elements are observed. The sample sizes in the D = 20 domains are: 1 for seven domains, 2 for six domains and 3 for seven domains. Model parameters are estimated using the restricted maximum likelihood method: we wrote the restricted likelihood function for the model in the R language and then used the constrOptim function available in the stats R package to find its maximum. The number of iterations in the Monte Carlo simulation study is L = 2000. In the simulation study the simulation MSE of the EBLUP is computed as the mean, over the L iterations, of the squared differences between the value of the EBLUP and the domain total, and the simulation (relative) bias of the MSE estimator is computed as the difference between the mean of the L values of the MSE estimator and the simulation MSE, divided by the simulation MSE, where the values of the EBLUP, the domain total and the MSE estimator are computed in the lth iteration of the simulation study. In the simulation, data are generated based on the model (1) assuming arbitrarily chosen parameters, including different values of λ(sp) and λ(t). The spatial weight matrix (denoted by W_d) is a row-standardized neighborhood matrix (each population element has two neighbors). In the simulation study three predictors are considered:
- the spatial BLUP (SBLUP), given by (5), where variance parameters are assumed to be known,
- the spatial EBLUP (SEBLUP), given by (5), where variance parameters are replaced by their REML estimates,
- the BLUP derived under the assumption of the lack of spatio-temporal correlation (BLUPind).
Because we are mainly interested in the spatial effect, in the simulation we assumed λ(t) ∈ {−0.5, 0.5} and λ(sp) ∈ {−0.9, −0.6, 0.6, 0.9}. In our opinion the comparison of the accuracies of the SEBLUP and its simplified version (derived under the assumption of the lack of spatio-temporal correlation of random effects and random components) is crucial, because the simplified predictor is the natural alternative to the SEBLUP. Importantly, this comparison measures the effect of including the spatio-temporal correlation. The additional comparison between the mean squared errors of the SEBLUP and the SBLUP is also important because it allows us to measure the loss of accuracy due to the estimation of model parameters. In each figure, squares denote the values of a given statistic for each of the D = 20 domains and the black squares denote the mean values of the statistic over the D = 20 domains. Hence, we present not only the mean values of the considered statistics but their whole distribution [as, e.g., the simulation results presented by Białek (2014)]. In Fig. 1 it is shown that the ratios of the mean squared errors of BLUPind and SEBLUP for all domains and different values of λ(t) and λ(sp) range from 1.004 to 1.131. This means that the maximum gain in accuracy due to the inclusion of the spatio-temporal correlation is 13.1 %. Because we compare the MSE of BLUPind with the MSE of SEBLUP (not SBLUP), the decrease of accuracy due to the estimation of model parameters is taken into account. Importantly, the decrease of accuracy due to the estimation of model parameters presented in Fig. 2 is very small, from 0.1 to 1.7 %, which means that its influence on the results presented in Fig. 1 is not large. The approximate unbiasedness of the MSE estimator (10) is not proven, but the biases presented in Fig. 3 are not high: for the D = 20 domains and for different values of λ(sp) and λ(t) they range from about −8.8 % to about 16.8 % (with a mean of about 1.9 %). In Fig. 4 the biases of the two MSE estimators (10) and (12) are compared for λ(t) = −0.5 and λ(sp) = −0.9, where we observed (see Fig. 3) the highest bias of the proposed MSE estimator based on the Taylor expansion.
There it is shown that the jackknife estimator may give significantly better results, although this is not the rule (compare with Fig. 7 for the real data).

Monte Carlo simulation study: real data

The second simulation study was also conducted using the R package (R Development Core Team 2013) and model parameters are estimated using R as described in the previous section. The number of iterations in the Monte Carlo simulation study is L = 2000. We consider real data on investments of Polish companies (in million PLN) in N = 378 regions called poviats (NUTS 4) in M = 3 years, 2009-2011. We consider a balanced panel sample: in the first period a sample of size n = 38 is selected using the (arbitrarily chosen) Midzuno (1952) sampling scheme, and the same elements remain in the sample in all M = 3 periods. The population is divided into D = 28 domains according to larger regions called voivodships (NUTS 3) and types of poviats (city poviats and land poviats) within voivodships. In 7 out of D = 28 domains the sample size equals 0. The spatial weight matrix is the row-standardized neighborhood matrix. The neighborhood matrix is constructed based on the 2-nearest-neighbors rule using an auxiliary variable, the number of new companies registered in the poviat (an illustrative construction is sketched below, after the list of compared predictors). Data are generated based on the model (1), where the values of all of the model parameters are obtained based on the whole population data using REML and assuming that β_d = β for all d, because for the considered case we have no observations from some of the domains in all of the periods (which implies that it is not possible to estimate some of the β_d's). What is important, the spatial and temporal correlations for the real data are weak: λ(t) = 0.352 and λ(sp) = −0.396. In the model-based simulation study we compare the accuracies of the following predictors and estimators of the domain total in the last period:
- the spatial BLUP (SBLUP), given by (5), where variance parameters are assumed to be known,
- the spatial EBLUP (SEBLUP), given by (5), where variance parameters are replaced by REML estimates,
- the BLUP under the assumption that λ(sp) = 0 and λ(t) = 0 (BLUPind), which under the model and for the balanced panel sample does not depend on unknown model parameters,
- the Count Synthetic Estimator (C-SYN), see Rao (2003, p. 46),
- the Ratio Synthetic Estimator (R-SYN), see Rao (2003, p. 47), where the auxiliary variable is the number of new companies registered in the poviat in 2011,
- the Generalized Regression Estimator (GREG), see Rao (2003, p. 17),
- a variant of the GREG estimator denoted by GREG-L,
- a predictor denoted by SP based on the model Y_{idj} = x_{idj} β + u_{1,d} + u_{2,dj} + e_{idj}, where e_{idj} ~ (0, σ^2_0), the domain-specific effects u_{1,d} are independent with u_{1,d} ~ (0, σ^2_1), and the time-varying area effects u_{2,dj} for d = 1, 2, ..., D are independent but, within domains, follow for j = 1, 2, ..., M an AR(1) process with parameters denoted by σ^2_2 and ρ(t). The SP predictor does not take the spatial correlation into account. The temporal autocorrelation is included, but at a higher aggregation level: within domains instead of within profiles as in (1). To compute the values of the predictor, the R function presented in Molina et al. (2010b, pp. 123-126) is used.
SEBLUP, SBLUP, BLUPind and SP use information on the variable of interest from all of the periods, while C-SYN, R-SYN, GREG and GREG-L use information on the variable of interest only from the last period. GREG and GREG-L are direct estimators, which means that it is possible to compute their values only for domains with sample sizes greater than zero in the period of interest (in 21 out of D = 28 domains in the simulation study).
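The construction of the spatial weight matrix described above can be illustrated with a short base-R sketch. The numeric vector aux (one value per region, e.g. the number of newly registered companies) and the tie-breaking implied by order() are illustrative assumptions, not necessarily the exact construction used for the real data.

```r
# Sketch: row-standardized spatial weight matrix based on the 2 nearest neighbours,
# where "nearest" is measured by the distance between values of an auxiliary
# variable (a hypothetical numeric vector `aux` with one value per region).
make_w_matrix <- function(aux, k = 2) {
  n <- length(aux)
  W <- matrix(0, n, n)
  for (i in seq_len(n)) {
    dist_i    <- abs(aux - aux[i])          # distances to all regions
    dist_i[i] <- Inf                        # a region is not its own neighbour
    nb        <- order(dist_i)[seq_len(k)]  # indices of the k nearest neighbours
    W[i, nb]  <- 1 / k                      # row-standardization: each row sums to one
  }
  W
}

# Example with artificial values of the auxiliary variable:
# W <- make_w_matrix(aux = c(120, 45, 300, 80, 95))
```

Row-standardization guarantees that every row of W sums to one, which is the form of the weight matrix used in both simulation studies.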
In Fig. 5 the accuracy of SEBLUP is compared with the other predictors and estimators. The estimators and predictors R-SYN, C-SYN, GREG, GREG-L and SP are several times less accurate than SEBLUP. Interestingly, in 22 out of D = 28 domains SEBLUP is less accurate than BLUPind. The situation is explained in Fig. 6 (the results for the same domains are matched by lines). The reason is that the gain in accuracy due to including the spatio-temporal correlation (assuming that model parameters are known), measured by the ratios MSE(BLUPind)/MSE(SBLUP), is in 22 domains smaller than the increase of the MSE due to the estimation of model parameters, measured by the ratios MSE(SEBLUP)/MSE(SBLUP). This confirms the suggestion presented in the previous section that the comparison of SEBLUP and its simplified version (assuming the lack of spatio-temporal correlation) is very important or even crucial. In Fig. 7 the biases of the two MSE estimators (10) and (12) are compared. For the studied case the means of the absolute biases are similar (see the right part of Fig. 7): for the jackknife MSE estimator it equals 5.1 %, while for the MSE estimator based on the Taylor expansion it equals 4.8 %.

Case study: real data

In the previous section we studied, in a simulation study, the problem of prediction of the total value of investments of Polish companies (in million PLN) in D = 28 regions in 2011. Because we were interested in the gain in accuracy which resulted only from incorporating the spatio-temporal correlation, we did not use auxiliary information. In this section we will use the same data to show how to choose an appropriate model based on the real data. We will use data on investments of Polish companies in 2009-2011 (the same as in the previous section) and additionally two auxiliary variables observed in 2008-2010. The same sample as in the previous section is studied. Firstly, we would like to find an appropriate model for the real data. It is possible to use the likelihood ratio test to compare two models, but only if the models are nested (see e.g. Pinheiro and Bates 2000, pp. 83-84). Hence, at the significance level 0.05, we compare our model with two auxiliary variables with its special cases, also with two auxiliary variables but under simplified assumptions on the spatio-temporal correlation, obtaining the following p values:
- assuming the independence of random effects and the independence of random components (p value of the likelihood ratio test: 1.1 × 10^{-8}),
- assuming the independence of random effects and MA(1) random components (p value of the likelihood ratio test: 2.8 × 10^{-9}),
- assuming the spatial moving average model for random effects and the independence of random components (p value of the likelihood ratio test: 0.0306).
Hence, our model should be preferred over its special cases. Pinheiro and Bates (2000) in chapter 5 suggest using, e.g., the Akaike Information Criterion (AIC) or the Bayesian Information Criterion (BIC) if we would like to compare non-nested models. Moreover, the authors present different models available in R, which will be compared in this section with the proposed model (1). It is possible to include other models as well, but in this case the computations must be conducted using original functions (as in the case of the proposed model).
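For the standard correlation structures of Pinheiro and Bates (2000) such comparisons can be carried out directly with the nlme package in R, whereas the proposed model (1) requires purpose-written functions. The sketch below therefore only illustrates the general workflow; the data frame dat and its columns (y, x1, x2, profile, time) are hypothetical.

```r
library(nlme)

# Hypothetical longitudinal data frame `dat` with columns:
#   y        variable of interest
#   x1, x2   auxiliary variables
#   profile  identifier of the population element (profile)
#   time     period number
m_indep <- lme(y ~ x1 + x2, data = dat, random = ~ 1 | profile,
               method = "REML")
m_ar1   <- lme(y ~ x1 + x2, data = dat, random = ~ 1 | profile,
               correlation = corAR1(form = ~ time | profile), method = "REML")
m_arma  <- lme(y ~ x1 + x2, data = dat, random = ~ 1 | profile,
               correlation = corARMA(form = ~ time | profile, p = 1, q = 1),
               method = "REML")

# Nested models with identical fixed effects: likelihood ratio test
anova(m_indep, m_ar1)

# Non-nested models: information criteria
AIC(m_indep, m_ar1, m_arma)
BIC(m_indep, m_ar1, m_arma)
```

anova() applied to two nested lme fits returns the likelihood ratio test, while AIC() and BIC() provide the criteria used for the non-nested comparisons.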
Pinheiro and Bates (2000) in chapter 5 present special cases of the linear mixed model in which different assumptions on the correlation structure of random components can be made within groups defined by the grouping variable used for the random effects, while random components from different groups are assumed to be independent. Hence, if we assume profile-specific random effects we can define different temporal models for random components within profiles, and if we define time-specific random effects we can define different spatial models for random components within domains. Below we use different correlation structures described by Pinheiro and Bates (2000) in chapter 5, including different spatial correlation structures defined in Pinheiro and Bates (2000, p. 232). In Table 1 we present the values of the AIC and BIC criteria of the proposed model and of other non-nested models:
- with independent profile-specific random effects and MA(2) random components (model_i_MA2),
- with independent profile-specific random effects and AR(1) random components (model_i_AR1),
- with independent profile-specific random effects and AR(2) random components (model_i_AR2),
- with independent profile-specific random effects and ARMA(1,1) random components (model_i_ARMA),
- with independent profile-specific random effects and compound symmetry temporal correlation of random components (model_i_compound_symmetry),
- with independent time-specific random effects and independent random components (model_t),
- with independent time-specific random effects and exponential spatial correlation of random components (model_t_exponential),
- with independent time-specific random effects and Gaussian spatial correlation of random components (model_t_gaussian),
- with independent time-specific random effects and linear spatial correlation of random components (model_t_linear),
- with independent time-specific random effects and rational quadratic spatial correlation of random components (model_t_rational_quadratic),
- with independent time-specific random effects and spherical spatial correlation of random components (model_t_spherical),
- with independent time-specific random effects and compound symmetry spatial correlation of random components (model_t_compound_symmetry),
- with independent domain-specific random effects and independent random components (model_d).
The proposed model has the smallest values of the AIC and BIC criteria in comparison with the other analyzed models. It is worth noting that the values of the criteria for some models are the same, which is not unusual; see e.g. Pinheiro and Bates (2000, p. 249), where 4 out of 5 models with different spatial correlation structures have the same values of the AIC and BIC criteria. We have also compared our model with models with the same variance-covariance matrices as the models presented in Table 1 but using only one out of the two auxiliary variables. These models also have higher values of the AIC and BIC criteria than the proposed model. Although the assumed model with only one out of the two auxiliary variables has higher values of the AIC and BIC criteria, a formal test of significance of the fixed effects will be conducted as well. In this section we will use permutation tests of the fixed effects. The algorithm for testing the jth fixed effect is as follows (Pesarin and Salmaso 2010, p. 45):
1. Based on the original data a test statistic, denoted by T_0 = T(X), is computed; e.g. the test statistic can be defined as the log-likelihood (as in this paper).
2. We take a random permutation of the jth column of the matrix X and obtain a new matrix of auxiliary variables denoted by X*.
3. The value of the test statistic T* = T(X*) is computed.
4. Steps 2 and 3 are repeated B times and B values of T*_b = T(X*_b) are computed, where b = 1, 2, ..., B.
5. We estimate the p value as B^{-1} Σ_{1≤b≤B} I(T*_b ≥ T_0), the fraction of the permutation values not smaller than the value of the test statistic computed based on the real data.
Even if it is not possible to make computations for all possible permutations, the estimated p value converges strongly to its respective true value (Pesarin and Salmaso 2010, p. 45). In the case study the number of all possible permutations is (n × M)! = (38 × 3)! ≈ 2.5 × 10^186. Hence, p values will be computed based on B = 1000 independent permutations.
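A minimal R sketch of this permutation procedure is given below, assuming a hypothetical function loglik_fit(y, X) that fits the assumed model and returns its log-likelihood, which plays the role of the test statistic T(X).

```r
# Sketch of the permutation test of the j-th fixed effect.
# loglik_fit(y, X) is a hypothetical helper that fits the assumed model for the
# response y and the matrix of auxiliary variables X and returns its log-likelihood.
permutation_test <- function(y, X, j, B = 1000) {
  T0     <- loglik_fit(y, X)               # test statistic for the original data
  T_perm <- numeric(B)
  for (b in seq_len(B)) {
    X_star      <- X
    X_star[, j] <- sample(X[, j])          # random permutation of the j-th column
    T_perm[b]   <- loglik_fit(y, X_star)   # test statistic for the permuted data
  }
  mean(T_perm >= T0)                       # estimated p value (step 5)
}
```

The returned value is the fraction of permutations with a test statistic not smaller than T_0, i.e. the estimated p value from step 5 of the algorithm.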
Let us consider the tests of fixed effects for the two auxiliary variables (production sold and fixed assets). In both cases the p values of the permutation test equal 0, which means that the variables have a significant influence on the variable of interest. Finally, in Fig. 8 we present the real values of the domain totals of investments and the predicted values, i.e. the values of the empirical version of the proposed predictor given by (5), based on the sample data considered in this section. It should be noted that the domain sample sizes equal:
- zero for 7 out of D = 28 domains,
- one for 11 out of D = 28 domains,
- two for 5 out of D = 28 domains,
- three for 3 out of D = 28 domains,
- four for 2 out of D = 28 domains.

Conclusions

In the paper a special case of the LMM for longitudinal data is proposed. The BLUP of the subpopulation total for the model is derived and MSE estimators of its empirical version are proposed. The accuracy of the proposed predictor and the biases of the proposed MSE estimators are analyzed in two Monte Carlo simulation studies based on artificial and real data. In the first simulation study, based on the artificial data, the accuracy of the empirical version of the proposed predictor was better for all of the domains in comparison with the predictor derived under the assumption of the lack of spatio-temporal correlation. In the second simulation study, based on the real data, the empirical version of the proposed predictor was even several times more accurate than the other predictors and estimators, but it was better than the predictor derived under the assumption of the lack of spatio-temporal correlation only in 6 out of 28 domains. This resulted from the decrease of accuracy due to the estimation of model parameters. In both simulation studies the biases of the proposed MSE estimator were small. The considerations are also supported by the case study.